How to define a simple Convolutional Neural Network in PyTorch?


To define a simple convolutional neural network (CNN), we can follow the steps below.

Steps

  • First, we import the libraries and packages needed to implement a simple CNN in PyTorch. In all the following examples, the required Python library is torch; make sure you have already installed it.

import torch
import torch.nn as nn
import torch.nn.functional as F
  • Our next step is to build a simple CNN model. Here, we use the nn package to implement the model. For this, we define a class MyNet that inherits from nn.Module.

class MyNet(nn.Module):
  • We need to define two methods inside the class to get our model ready. The first is __init__() and the second is forward(). Within __init__(), we call super().__init__() and define the different layers; the network must include at least one convolutional layer to be considered a CNN. Within forward(), we define how an input passes through those layers. A minimal skeleton is sketched below.

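The following is only an illustrative sketch of that structure; the single convolutional layer and its channel sizes are placeholders, and the complete networks are defined in the examples further below.

class MyNet(nn.Module):
   def __init__(self):
      super().__init__()
      # placeholder layer: 3 input channels, 8 output channels, 3x3 kernel
      self.conv1 = nn.Conv2d(3, 8, 3)

   def forward(self, x):
      # apply the convolution followed by a ReLU activation
      return F.relu(self.conv1(x))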
  • We need to instantiate the class defined above to train the model on a dataset. The forward() function is executed automatically when we call the model instance on an input tensor.

model = MyNet()
  • Print the model to see the different layers.

print(model)

Example 1

In the following program, we implement a simple convolutional neural network with two convolutional layers, a max pooling layer, and three fully-connected (Linear) layers. The layer sizes assume 32×32 RGB inputs (for example, CIFAR-10 images).

import torch
import torch.nn as nn
import torch.nn.functional as F

class MyNet(nn.Module):
   def __init__(self):
      super().__init__()
      # two convolutional layers with a 2x2 max pooling layer applied after each
      self.conv1 = nn.Conv2d(3, 6, 5)
      self.pool = nn.MaxPool2d(2, 2)
      self.conv2 = nn.Conv2d(6, 16, 5)
      # three fully-connected layers; 16 * 5 * 5 is the flattened feature size for 32x32 inputs
      self.fc1 = nn.Linear(16 * 5 * 5, 120)
      self.fc2 = nn.Linear(120, 84)
      self.fc3 = nn.Linear(84, 10)

   def forward(self, x):
      x = self.pool(F.relu(self.conv1(x)))
      x = self.pool(F.relu(self.conv2(x)))
      x = torch.flatten(x, 1) # flatten all dimensions except batch
      x = F.relu(self.fc1(x))
      x = F.relu(self.fc2(x))
      x = self.fc3(x)
      return x

net = MyNet()
print(net)

Output

MyNet(
   (conv1): Conv2d(3, 6, kernel_size=(5, 5), stride=(1, 1))
   (pool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
   (conv2): Conv2d(6, 16, kernel_size=(5, 5), stride=(1, 1))
   (fc1): Linear(in_features=400, out_features=120, bias=True)
   (fc2): Linear(in_features=120, out_features=84, bias=True)
   (fc3): Linear(in_features=84, out_features=10, bias=True)
)
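
As a quick sanity check (assuming 32×32 RGB inputs, which is what the 16 * 5 * 5 flattened size corresponds to), we could pass a random batch through the network and inspect the output shape:

x = torch.randn(1, 3, 32, 32)   # dummy batch containing one 32x32 RGB image
out = net(x)                    # calling the model runs forward()
print(out.shape)                # torch.Size([1, 10])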

Example 2

In the following program, we implement another simple convolutional neural network, this time with three convolutional layers, a max pooling layer, and two fully-connected (Linear) layers.

import torch
import torch.nn as nn
import torch.nn.functional as F

class Model(nn.Module):
   def __init__(self):
      super(Model, self).__init__()

      # define the layers
      self.conv1 = nn.Conv2d(3, 16, 3, padding=1)
      self.conv2 = nn.Conv2d(16, 32, 3, padding=1)
      self.conv3 = nn.Conv2d(32, 64, 3, padding=1)
      self.pool = nn.MaxPool2d(2, 2)
      self.linear1 = nn.Linear(64*4*4, 512)
      self.linear2 = nn.Linear(512, 10)

   def forward(self, x):
      x = self.pool(F.relu(self.conv1(x)))
      x = self.pool(F.relu(self.conv2(x)))
      x = self.pool(F.relu(self.conv3(x)))
      x = x.view(-1, 1024)   # flatten the 64 x 4 x 4 feature maps (assumes 32x32 inputs)
      x = F.relu(self.linear1(x))
      x = self.linear2(x)
      return x

model = Model()
print(model)

Output

Model(
   (conv1): Conv2d(3, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
   (conv2): Conv2d(16, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
   (conv3): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
   (pool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
   (linear1): Linear(in_features=1024, out_features=512, bias=True)
   (linear2): Linear(in_features=512, out_features=10, bias=True)
)
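
Note that the call x.view(-1, 1024) hard-codes the flattened feature size, so this model also expects 32×32 inputs. As before, a quick check with a dummy batch (shown here only for illustration) confirms the output shape:

x = torch.randn(1, 3, 32, 32)   # dummy batch containing one 32x32 RGB image
out = model(x)
print(out.shape)                # torch.Size([1, 10])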
