Over the past few weeks we’ve been studying the fundamental building blocks of machine learning and neural network classifiers.
We started with an introduction to linear classification that discussed the concept of parameterized learning, and how this type of learning enables us to define a scoring function that maps our input data to output class labels.
This scoring function is defined in terms of parameters; specifically, our weight matrix W and our bias vector b. Our scoring function accepts these parameters as inputs and returns a predicted class label for each input data point.
From there, we discussed two common loss functions: Multi-class SVM loss and cross-entropy loss (commonly referred to in the same breath as “Softmax classifiers”). Loss functions, at the most basic level, are used to quantify how “good” or “bad” a given predictor (i.e., a set of parameters) is at classifying the input data points in our dataset.
Given these building blocks, we can now move on to arguably the most important aspect of machine learning, neural networks, and deep learning — optimization.
Throughout this discussion we’ve learned that high classification accuracy is dependent on finding a set of weights W such that our data points are correctly classified. Given W, we can compute our output class labels via our scoring function. And finally, we can determine how good/poor our classifications are given some W via our loss function.
But how do we go about finding and obtaining a weight matrix W that obtains high classification accuracy?
Do we randomly initialize W, evaluate, and repeat over and over again, hoping that at some point we land on a W that obtains reasonable classification accuracy?
Well, we could, and in some cases that might work just fine.
But in most situations, we instead need to define an optimization algorithm that allows us to iteratively improve our weight matrix W.
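To see why, here is a minimal sketch of that guess-and-check approach on a hypothetical toy problem (the data, loss, and number of trials below are illustrative, not the dataset used later in this post):

# random-search baseline: guess W at random and keep the best guess --
# workable on toy problems, hopeless as the number of parameters grows
import numpy as np

rng = np.random.RandomState(42)
X = rng.randn(100, 3)          # toy data: 100 points, 3 features
y = (X[:, 0] > 0).astype(int)  # toy labels

bestLoss = float("inf")
bestW = None

for attempt in range(1000):
	# sample a candidate weight vector at random
	W = rng.uniform(-1, 1, size=(3,))

	# score the candidate with a simple sum-of-squared-error loss
	preds = 1.0 / (1 + np.exp(-X.dot(W)))
	loss = np.sum((preds - y) ** 2)

	# keep the candidate only if it beats the best guess so far
	if loss < bestLoss:
		(bestLoss, bestW) = (loss, W)

print("best loss after 1000 random guesses: {:.4f}".format(bestLoss))

Every one of those 1,000 evaluations is independent of the last; nothing learned from one guess informs the next, which is exactly the inefficiency gradient descent fixes.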
In today’s blog post, we’ll be looking at arguably the most common algorithm used to find optimal values of W — gradient descent.
Looking for the source code to this post?
Jump right to the downloads section.
Gradient descent with Python
The gradient descent algorithm comes in two flavors:
- The standard “vanilla” implementation.
- The optimized “stochastic” version that is more commonly used.
Today we’ll be reviewing the basic vanilla implementation to form a baseline for our understanding. Then next week I’ll be discussing the stochastic version of gradient descent.
Gradient descent is an optimization algorithm
The gradient descent method is an iterative optimization algorithm that operates over a loss landscape.
We can visualize our loss landscape as a bowl, similar to the one you may eat cereal or soup out of:
The surface of our bowl is called our loss landscape, which is essentially a plot of our loss function.
The difference between our loss landscape and your cereal bowl is that your cereal bowl only exists in three dimensions, while your loss landscape exists in many dimensions, perhaps tens, hundreds, or even thousands of dimensions.
Each position along the surface of the bowl corresponds to a particular loss value given our set of parameters, W (weight matrix) and b (bias vector).
Our goal is to try different values of W and b, evaluate their loss, and then take a step towards more optimal values that will (ideally) have lower loss.
Iteratively repeating this process will allow us to navigate our loss landscape, following the gradient of the loss function (the bowl), and find a set of parameters that have minimum loss and high classification accuracy.
The “gradient” in gradient descent
To make our explanation of gradient descent a little more intuitive, let’s pretend that we have a robot — let’s name him Chad:
We place Chad on a random position in our bowl (i.e., the loss landscape):
It’s now Chad’s job to navigate to the bottom of the basin (where there is minimum loss).
Seems easy enough, right? All Chad has to do is orient himself such that he’s facing “downhill” and then ride the slope until he reaches the bottom of the basin.
But we have a problem: Chad isn’t a very smart robot.
Chad only has one sensor — this sensor allows him to take his weight matrix W and compute a loss function L.
Therefore, Chad is able to compute his (relative) position on the loss landscape, but he has absolutely no idea in which direction he should take a step to move himself closer to the bottom of the basin.
What is Chad to do?
The answer is to apply gradient descent.
All we need to do is follow the slope of the gradient of our loss with respect to W. We can approximate the gradient across all dimensions using the centered difference:

	df(x)/dx = lim(h -> 0) [f(x + h) - f(x - h)] / (2h)

In more than one dimension, our gradient becomes a vector of partial derivatives, one entry per dimension of W.
The problem with this equation is that:
- It’s an approximation to the gradient.
- It’s very slow.
In practice, we use the analytic gradient instead. This method is exact and fast, but it is challenging to derive, since it requires working out partial derivatives with multivariable calculus. You can read more about the numeric and analytic gradients here.
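To make the distinction between the numeric and analytic gradients concrete, here is a minimal sketch of the numerical approach; numerical_gradient and lossFn are illustrative names, not part of this post’s downloadable code:

import numpy as np

def numerical_gradient(lossFn, W, h=1e-5):
	# estimate each partial derivative with a centered difference --
	# note this costs two full loss evaluations *per parameter*,
	# which is exactly why the numerical gradient is so slow
	grad = np.zeros_like(W)
	for i in range(W.shape[0]):
		(Wp, Wm) = (W.copy(), W.copy())
		Wp[i] += h
		Wm[i] -= h
		grad[i] = (lossFn(Wp) - lossFn(Wm)) / (2 * h)
	return grad

# example: the gradient of f(W) = sum(W ** 2) is 2 * W
print(numerical_gradient(lambda W: np.sum(W ** 2), np.array([1.0, -2.0])))
# -> approximately [ 2. -4.]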
For the sake of this discussion, simply try to internalize what gradient descent is doing: attempting to optimize our parameters for low loss and high classification accuracy.
Pseudocode for gradient descent
Below I have included some Python-like pseudocode of the standard, vanilla gradient descent algorithm, inspired by the CS231n slides:
while True:
	Wgradient = evaluate_gradient(loss, data, W)
	W += -alpha * Wgradient
This pseudocode is essentially what all variations of gradient descent are built off of.
We start off on Line 1 by looping until some condition is met; a sketch of such stopping checks follows the list below. Normally this condition is one of the following:
- A specified number of epochs has passed (meaning our learning algorithm has “seen” each of the training data points N times).
- Our loss has become sufficiently low or training accuracy satisfactorily high.
- Loss has not improved in M subsequent epochs.
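Here is a hypothetical sketch of how those three stopping conditions might be wired together; the threshold values and the placeholder loss are illustrative only:

# illustrative stopping checks -- in a real training loop, the loss
# would come from the gradient computation and weight update
maxEpochs = 100   # condition 1: fixed number of epochs
minLoss = 1e-4    # condition 2: loss is "low enough"
patience = 10     # condition 3: M epochs without improvement

bestLoss = float("inf")
stale = 0

for epoch in range(maxEpochs):
	# placeholder: pretend the loss shrinks each epoch; in practice
	# this is where the gradient update would happen
	loss = 1.0 / (epoch + 1)

	if loss < minLoss:
		break
	if loss < bestLoss:
		(bestLoss, stale) = (loss, 0)
	else:
		stale += 1
	if stale >= patience:
		break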
Line 2 then calls a function named evaluate_gradient. This function requires three parameters:

- loss: A function used to compute the loss over our current parameters W and input data.
- data: Our training data, where each training sample is represented by a feature vector.
- W: Our weight matrix that we are optimizing over. Our goal is to apply gradient descent to find a W that yields minimal loss.
The evaluate_gradient function returns a vector that is K-dimensional, where K is the number of dimensions in our feature vector. The Wgradient variable is our actual gradient, with one entry for each dimension.
We then apply the actual gradient descent on Line 3.
We multiply our Wgradient by alpha, which is our learning rate. The learning rate controls the size of our step.
In practice, you’ll spend a lot of time finding an optimal learning rate alpha; it is by far the most important parameter in your model.
If alpha is too large, we’ll end up spending all our time bouncing around our loss landscape and never actually “descending” to the bottom of our basin (unless our random bouncing takes us there by pure luck).
Conversely, if alpha is too small, then it will take many (perhaps prohibitively many) iterations to reach the bottom of the basin.
Because of this, alpha will cause you many headaches, and you’ll spend a considerable amount of your time trying to find an optimal value for your classifier and dataset.
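A toy one-dimensional illustration (not from this post) makes both failure modes visible; here we minimize f(w) = w ** 2, whose gradient is 2 * w:

# gradient descent on f(w) = w ** 2 with two different step sizes
wSmall = wLarge = 10.0
for i in range(25):
	wSmall -= 0.01 * (2 * wSmall)  # alpha too small: barely moves
	wLarge -= 1.10 * (2 * wLarge)  # alpha too large: overshoots 0

print("small alpha: w = {:.4f}".format(wSmall))  # ~6.03, still far from 0
print("large alpha: w = {:.1f}".format(wLarge))  # ~-954, diverged entirely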
Implementing gradient descent with Python
Now that we know the basics of gradient descent, let’s implement gradient descent in Python and use it to classify some data.
Open up a new file, name it gradient_descent.py, and insert the following code:
# import the necessary packages
import matplotlib.pyplot as plt
from sklearn.datasets import make_blobs
import numpy as np
import argparse

def sigmoid_activation(x):
	# compute and return the sigmoid activation value for a
	# given input value
	return 1.0 / (1 + np.exp(-x))
Lines 2-5 import our required Python packages.
We then define the sigmoid_activation function on Line 7. When plotted, this function will resemble an “S”-shaped curve:
We call this an activation function because the function will “activate” and fire “ON” (output value >= 0.5) or “OFF” (output value < 0.5) based on the input x.
While there are other (better) alternatives to the sigmoid activation function, it makes for an excellent “starting point” in our discussion of machine learning, neural networks, and deep learning.
I’ll also be discussing activation functions in more detail in a future blog post, so for the time being, simply keep in mind that this is a non-linear activation function that we can use to “threshold” our predictions.
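As a quick, illustrative sanity check (not part of the post’s script), evaluating the sigmoid at a few inputs shows this ON/OFF behavior:

import numpy as np

x = np.array([-4.0, 0.0, 4.0])
print(1.0 / (1 + np.exp(-x)))
# -> approximately [0.018  0.5  0.982]: "OFF", the threshold, and "ON"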
Next, let’s parse our command line arguments:
# construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-e", "--epochs", type=int, default=100,
	help="# of epochs")
ap.add_argument("-a", "--alpha", type=float, default=0.01,
	help="learning rate")
args = vars(ap.parse_args())
We can provide two (optional) command line arguments to our script:
- --epochs: The number of epochs that we’ll use when training our classifier using gradient descent.
- --alpha: The learning rate for gradient descent. We typically see 0.1, 0.01, and 0.001 as initial learning rate values, but again, you’ll want to tune this hyperparameter for your own classification problems.
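For example, to train for 200 epochs with a smaller learning rate, you could invoke the script like so:

$ python gradient_descent.py --epochs 200 --alpha 0.001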
Now that our command line arguments are parsed, let’s generate some data to classify:
# generate a 2-class classification problem with 250 data points,
# where each data point is a 2D feature vector
(X, y) = make_blobs(n_samples=250, n_features=2, centers=2,
	cluster_std=1.05, random_state=20)

# insert a column of 1's as the first entry in the feature
# vector -- this is a little trick that allows us to treat
# the bias as a trainable parameter *within* the weight matrix
# rather than an entirely separate variable
X = np.c_[np.ones((X.shape[0])), X]

# initialize our weight matrix such that it has the same number
# of entries as our input features
print("[INFO] starting training...")
W = np.random.uniform(size=(X.shape[1],))

# initialize a list to store the loss value for each epoch
lossHistory = []
On Line 22 we make a call to make_blobs, which generates 250 data points. These data points are 2D, implying that the “feature vectors” are of length 2.
Furthermore, 125 of these data points belong to class 0 and the other 125 to class 1. Our goal is to train a classifier that correctly predicts each data point as being class 0 or class 1.
Line 29 applies a neat little trick that allows us to skip explicitly keeping track of our bias vector b. To accomplish this, we insert a brand new column of 1’s as the first entry in our feature vector. This addition of a column containing a constant value across all feature vectors allows us to treat our bias as a trainable parameter that is within the weight matrix W rather than an entirely separate variable. You can learn more about this trick here and here.
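Concretely, with the column of 1’s in place, the scoring function

	f(x) = w1 * x1 + w2 * x2 + b

becomes

	f(x) = W[0] * 1 + W[1] * x1 + W[2] * x2

so the bias b is simply learned as the first entry of the weight matrix, W[0].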
Line 34 (randomly) initializes our weight matrix such that it has the same number of dimensions as our input features.
It’s also common to see both zero and one weight initialization, but I tend to prefer random initialization. Weight initialization methods will be discussed in further detail inside future neural network and deep learning blog posts; a few common alternatives are sketched below.
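For reference, those alternatives would look like this; each is a hypothetical drop-in substitution for the initialization on Line 34, producing one entry per column of X:

# zero, one, and uniform-random initializations, respectively
W = np.zeros((X.shape[1],))                # all-zero start
W = np.ones((X.shape[1],))                 # all-one start
W = np.random.uniform(size=(X.shape[1],))  # random start (used here)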
Finally, Line 37 initializes a list to keep track of our loss after each epoch. At the end of our Python script, we’ll plot the loss which should ideally decrease over time.
All of our variables are now initialized, so we can move on to the actual training and gradient descent procedure:
# loop over the desired number of epochs
for epoch in np.arange(0, args["epochs"]):
	# take the dot product between our features `X` and the
	# weight matrix `W`, then pass this value through the
	# sigmoid activation function, thereby giving us our
	# predictions on the dataset
	preds = sigmoid_activation(X.dot(W))

	# now that we have our predictions, we need to determine
	# our `error`, which is the difference between our predictions
	# and the true values
	error = preds - y

	# given our `error`, we can compute the total loss value as
	# the sum of squared loss -- ideally, our loss should
	# decrease as we continue training
	loss = np.sum(error ** 2)
	lossHistory.append(loss)
	print("[INFO] epoch #{}, loss={:.7f}".format(epoch + 1, loss))
On Line 40 we start looping over the supplied number of --epochs. By default, we’ll allow our training procedure to “see” each of the training points a total of 100 times (thus, 100 epochs).
Line 45 takes the dot product between our entire training data X and our weight matrix W. We take the output of this dot product and feed the values through the sigmoid activation function, giving us our predictions.
Given our predictions, the next step is to determine the “error” of the predictions, or more simply, the difference between our predictions and the true values (Line 50).
Line 55 computes the least squares error over our predictions (our loss value). The goal of this training procedure is thus to minimize the least squares error.
Now that we have our error, we can compute the gradient and then use it to update our weight matrix W:
	# the gradient update is therefore the dot product between
	# the transpose of `X` and our error, scaled by the total
	# number of data points in `X`
	gradient = X.T.dot(error) / X.shape[0]

	# in the update stage, all we need to do is nudge our weight
	# matrix in the negative direction of the gradient (hence the
	# term "gradient descent") by taking a small step towards a
	# set of "more optimal" parameters
	W += -args["alpha"] * gradient
Line 62 handles computing the actual gradient, which is the dot product between the transpose of our data points X and the error, scaled by the number of data points.
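As an aside, X.T.dot(error) is exactly the gradient you would derive for the cross-entropy loss paired with a sigmoid. For the sum-of-squared-error loss we are actually logging, the full chain rule would also carry the sigmoid’s derivative:

	# exact gradient of the logged sum-of-squared loss (for reference)
	gradient = 2 * X.T.dot(error * preds * (1 - preds)) / X.shape[0]

Dropping the preds * (1 - preds) factor is a common simplification; as the loss plot later in this post shows, the simpler update still drives the loss down on this dataset.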
Line 68 is the most critical step in our algorithm and where the actual gradient descent takes place. Here we update our weight matrix W by taking a small step (scaled by --alpha) in the negative direction of the gradient, thereby allowing us to move towards the bottom of the basin of the loss landscape (hence the term, gradient descent).
After updating our weight matrix, we keep looping until the desired number of epochs has been met — gradient descent is thus an iterative algorithm.
To actually demonstrate how we can use our weight matrix W as a classifier, take a look at the following code block:
# to demonstrate how to use our weight matrix as a classifier,
# let's look at a sample of training examples
for i in np.random.choice(250, 10):
	# compute the prediction by taking the dot product of the
	# current feature vector with the weight matrix W, then
	# passing it through the sigmoid activation function
	activation = sigmoid_activation(X[i].dot(W))

	# the sigmoid function is defined over the range y=[0, 1],
	# so we can use 0.5 as our threshold -- if `activation` is
	# below 0.5, it's class `0`; otherwise it's class `1`
	label = 0 if activation < 0.5 else 1

	# show our output classification
	print("activation={:.4f}; predicted_label={}, true_label={}".format(
		activation, label, y[i]))
We start on Line 72 by looping over a sample of our training data.
For each training point X[i], we compute the dot product between X[i] and the weight matrix W, then feed the value through our activation function.
On Line 81, we compute the actual output class label. If the activation is < 0.5, then the output is class 0; otherwise, the output is class 1.
Our last code block is used to plot our training data along with the decision boundary that is used to determine if a given data point is class 0 or class 1:
# compute the values of the decision boundary by setting the input
# to the sigmoid function to 0 (an activation of 0.5) and solving
# for X2 in terms of X1
Y = (-W[0] - (W[1] * X[:, 1])) / W[2]

# plot the original data along with our decision boundary
plt.figure()
plt.scatter(X[:, 1], X[:, 2], marker="o", c=y)
plt.plot(X[:, 1], Y, "r-")

# construct a figure that plots the loss over time
fig = plt.figure()
plt.plot(np.arange(0, args["epochs"]), lossHistory)
fig.suptitle("Training Loss")
plt.xlabel("Epoch #")
plt.ylabel("Loss")
plt.show()
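Where does the expression for Y come from? A data point lies on the decision boundary when the input to the sigmoid is 0 (an activation of exactly 0.5):

	W[0] * 1 + W[1] * x1 + W[2] * x2 = 0

Solving for x2 in terms of x1 gives the line we plot:

	x2 = (-W[0] - W[1] * x1) / W[2]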
Visualizing gradient descent
To test our gradient descent classifier, be sure to download the source code using the “Downloads” section at the bottom of this tutorial.
From there, execute the following command:
$ python gradient_descent.py
Examining the output, you’ll notice that our classifier runs for a total of 100 epochs with the loss decreasing and classification accuracy increasing after each epoch:
To visualize this better, take a look at the plot below, which demonstrates how our loss decreases dramatically over time:
We can then see a plot of our training data points along with the decision boundary learned by our gradient descent classifier:
Notice how the decision boundary learned by our gradient descent classifier neatly divides data points of the two classes.
We can then manually investigate the classifications made by our gradient descent model. In each case, we are able to correctly predict the class:
To visualize and demonstrate gradient descent in action, I have created the following animation which shows the decision boundary being “learned” after each epoch:
As you can see, our decision boundary starts off wildly inaccurate due to the random initialization. But as time passes, we are able to apply gradient descent, update our weight matrix W, and eventually learn an accurate model.
Want to learn more about gradient descent?
In next week’s blog post, I’ll be discussing a slight modification to gradient descent called Stochastic Gradient Descent (SGD).
In the meantime, if you want to learn more about gradient descent, you should absolutely refer to Andrew Ng’s gradient descent lesson in the Coursera Machine Learning course.
I would also recommend Andrej Karpathy’s excellent slides from the CS231n course.
Summary
In this blog post we learned about gradient descent, a first-order optimization algorithm that can be used to learn a set of parameters that will (ideally) obtain low loss and high classification accuracy on a given problem.
I then demonstrated how to implement a basic gradient descent algorithm using Python. Using this implementation, we were able to actually visualize how gradient descent can be used to learn and optimize our weight matrix W.
In next week’s blog post, I’ll be discussing a modification to the vanilla gradient descent implementation called Stochastic Gradient Descent (SGD). The SGD flavor of gradient descent is more commonly used than the one we introduced today, but I’ll save a more thorough discussion for next week.
See you then!
Before you go, be sure to use the form below to sign up for the PyImageSearch Newsletter — you’ll then be notified when future blog posts are published.