
Linear Regression using PyTorch

Regression is the task of predicting a continuous output: we try to map input variables to some continuous function. In linear regression, this function is a straight line (or, with several inputs, a plane). For example, the price of an ice-cream could be modelled by the following linear equation:

    ice_cream_price = w1*cost_of_ingredients + w2*temperature + w3*rent_of_shop + ...

We train our model to learn the best possible values for the weights in this equation, i.e. w1, w2, etc. This is done using a technique called gradient descent.

Problem Statement

Let’s create a model that predicts crop yields for apples and oranges (target variables) by looking at the average temperature, rainfall, and humidity (input variables or features) in a region. We are given training data that records these input variables along with the actual yields observed.

In a linear regression model, each target variable is estimated to be a weighted sum of the input variables, offset by some constant known as a bias:

    yield_apple  = w11 * temp + w12 * rainfall + w13 * humidity + b1
    yield_orange = w21 * temp + w22 * rainfall + w23 * humidity + b2

Visually, this means that each yield is a linear (planar) function of the temperature, rainfall, and humidity.

Our objective is to find good values for the weights and biases using our training data.

Training Data Representation

We can represent our training data using two numpy arrays (inputs and targets), each with one row per observation and one column per variable. We can then convert these arrays into PyTorch tensors. A tensor is just a multi-dimensional matrix containing elements of a single data type.
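
We can sketch this as follows; the numbers below are illustrative values invented for this example, not real agricultural data:

    import numpy as np
    import torch

    # Illustrative training data: one row per region, with columns for
    # temperature (F), rainfall (mm), and humidity (%)
    inputs = np.array([[73, 67, 43],
                       [91, 88, 64],
                       [87, 134, 58],
                       [102, 43, 37],
                       [69, 96, 70]], dtype='float32')

    # Target crop yields: columns for apples and oranges (tons)
    targets = np.array([[56, 70],
                        [81, 101],
                        [119, 133],
                        [22, 37],
                        [103, 119]], dtype='float32')

    # Convert the numpy arrays into PyTorch tensors
    inputs = torch.from_numpy(inputs)
    targets = torch.from_numpy(targets)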

Linear Regression Model from scratch

Before using PyTorch built-ins, let’s create a model from scratch to get a better understanding of the underlying process.

To do this, we first initialise our weight and bias matrices with random numbers. We then define our linear regression model, generate predictions using the weights and biases, and compute the loss of those predictions. We use Mean Squared Error (MSE) as the loss function.
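
A minimal from-scratch sketch of these steps, assuming the inputs and targets tensors defined above (the weight matrix has one row per target and one column per input feature):

    import torch

    # Random initialisation; requires_grad=True tells PyTorch to track
    # gradients for these tensors during the backward pass
    w = torch.randn(2, 3, requires_grad=True)  # 2 targets x 3 features
    b = torch.randn(2, requires_grad=True)     # one bias per target

    # The linear regression model: y = x @ w.T + b
    def model(x):
        return x @ w.t() + b

    # Mean Squared Error: the average of the squared differences
    def mse(preds, targets):
        diff = preds - targets
        return torch.sum(diff * diff) / diff.numel()

    # Generate predictions and compute the loss
    preds = model(inputs)
    loss = mse(preds, targets)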

We reduce loss and improve our model using the gradient descent algorithm which has the following steps:

  1. Generate predictions
  2. Calculate the loss
  3. Compute gradients w.r.t the weights and biases
  4. Adjust the weights by subtracting a small quantity proportional to the gradient
  5. Reset the gradients to zero

To reduce the loss further, we repeat the process of adjusting the weights and biases using the gradients multiple times. Each iteration is called an epoch.
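
Putting the five steps together, a minimal training loop might look like this (it reuses model, mse, w, and b from the sketch above; the learning rate of 1e-5 and the 100 epochs are arbitrary choices for illustration):

    for epoch in range(100):
        preds = model(inputs)          # 1. generate predictions
        loss = mse(preds, targets)     # 2. calculate the loss
        loss.backward()                # 3. compute gradients w.r.t. w and b
        with torch.no_grad():
            w -= w.grad * 1e-5         # 4. adjust weights and biases by a
            b -= b.grad * 1e-5         #    small multiple of the gradient
            w.grad.zero_()             # 5. reset the gradients to zero
            b.grad.zero_()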

Linear Regression using PyTorch built-ins

PyTorch provides the elegantly designed modules and classes torch.nn, Dataset, and DataLoader (the latter two from torch.utils.data) to help us create and train neural networks.

TensorDataset: PyTorch’s TensorDataset is a Dataset wrapping tensors. By defining a length and a way of indexing, it gives us a way to iterate, index, and slice along the first dimension of a tensor. This makes it easy to access both the independent and dependent variables in the same line as we train.
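
A minimal sketch, reusing the inputs and targets tensors from earlier:

    from torch.utils.data import TensorDataset

    # Wrap the input and target tensors; indexing the dataset
    # returns an (input, target) pair
    train_ds = TensorDataset(inputs, targets)
    train_ds[0:3]  # the first three (input, target) pairs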

DataLoader: PyTorch’s DataLoader is responsible for managing batches. You can create a DataLoader from any Dataset, and it makes it easy to iterate over batches.
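
For example (the batch size of 5 is an arbitrary choice):

    from torch.utils.data import DataLoader

    # Split the dataset into shuffled batches of 5
    batch_size = 5
    train_dl = DataLoader(train_ds, batch_size, shuffle=True)

    # Each iteration yields one batch of inputs and targets
    for xb, yb in train_dl:
        print(xb.shape, yb.shape)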

Model: Instead of initializing the weights and biases manually, we can define the model using torch.nn.Linear.
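
For our problem, the model has 3 input features and 2 outputs:

    import torch.nn as nn

    # nn.Linear creates and initialises the weight and bias tensors for us
    model = nn.Linear(3, 2)
    print(model.weight)  # shape (2, 3)
    print(model.bias)    # shape (2,)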

Optimizer: We can make use of the built-in stochastic gradient descent optimizer, torch.optim.SGD.
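
For example (the learning rate is again an arbitrary illustrative value):

    import torch

    # SGD updates model.parameters() in the direction opposite to their
    # gradients, scaled by the learning rate
    opt = torch.optim.SGD(model.parameters(), lr=1e-5)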

Using these modules and classes, our previous code can be simplified to:
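
A sketch of that simplified loop, assuming the model, opt, and train_dl defined above (the fit helper is just a name chosen for this sketch; F.mse_loss is PyTorch’s built-in mean squared error):

    import torch.nn.functional as F

    loss_fn = F.mse_loss

    def fit(num_epochs, model, loss_fn, opt, train_dl):
        for epoch in range(num_epochs):
            for xb, yb in train_dl:        # iterate over batches
                preds = model(xb)          # 1. generate predictions
                loss = loss_fn(preds, yb)  # 2. calculate the loss
                loss.backward()            # 3. compute gradients
                opt.step()                 # 4. adjust the parameters
                opt.zero_grad()            # 5. reset the gradients

    fit(100, model, loss_fn, opt, train_dl)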

And we’re done! Make sure you try the Kaggle Boston Housing Challenge to put your linear regression skills into practice.
