nn.Conv2d is a class in the PyTorch deep learning framework that represents a 2-dimensional convolutional layer. Convolutional layers are a fundamental building block in Convolutional Neural Networks (CNNs), which are widely used for image processing, computer vision, and other tasks involving grid-like input data.
The nn.Conv2d class is part of the torch.nn module, which provides a set of predefined layers, loss functions, and other components for building neural networks in PyTorch.
When you create an instance of nn.Conv2d, you need to specify a few parameters:
in_channels: The number of input channels (e.g., 3 for a color image with RGB channels)
out_channels: The number of output channels, also known as the number of filters or feature maps in the layer
kernel_size: The size of the convolutional kernel (e.g., 3 for a 3×3 kernel)
stride: The step size used when moving the kernel across the input (default is 1)
padding: The amount of zero-padding added to the input before performing the convolution (default is 0)
dilation: The spacing between kernel values (default is 1)
groups: The number of blocked connections between input and output channels; both in_channels and out_channels must be divisible by groups (default is 1)
bias: Whether or not to add a learnable bias to the output (default is True)
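Several of these parameters interact to determine the spatial size of the output feature maps. As a rough sketch (this mirrors the shape formula given in the PyTorch documentation for nn.Conv2d; the helper function name here is our own):

```python
def conv2d_out_size(size, kernel_size, stride=1, padding=0, dilation=1):
    """Spatial output size of a Conv2d along one dimension (height or width).

    Implements: floor((size + 2*padding - dilation*(kernel_size - 1) - 1) / stride) + 1
    """
    return (size + 2 * padding - dilation * (kernel_size - 1) - 1) // stride + 1

# A 3x3 kernel with stride 1 and padding 1 preserves the input size:
print(conv2d_out_size(32, kernel_size=3, stride=1, padding=1))  # 32

# The same kernel with stride 2 halves it:
print(conv2d_out_size(32, kernel_size=3, stride=2, padding=1))  # 16
```

This is why the common "3×3 kernel, stride 1, padding 1" configuration is popular: it keeps the spatial dimensions unchanged, so only the channel count changes between layers.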
Here’s an example of how to create a 2D convolutional layer in PyTorch:
import torch.nn as nn

conv_layer = nn.Conv2d(in_channels=3, out_channels=64, kernel_size=3, stride=1, padding=1)
This layer takes a 3-channel input (e.g., an RGB image) and produces 64 feature maps, using 3×3 convolutional kernels with a stride of 1 and padding of 1.
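To get a feel for what this layer learns, it can help to count its parameters by hand. Each of the 64 filters has one 3×3 kernel per input channel, plus one bias term per filter (a back-of-the-envelope sketch; in PyTorch the same number is available via sum(p.numel() for p in conv_layer.parameters())):

```python
# Learnable parameters of Conv2d(in_channels=3, out_channels=64, kernel_size=3):
in_channels, out_channels, kernel_size = 3, 64, 3

# Weight tensor has shape (out_channels, in_channels, kernel_size, kernel_size).
weight_params = out_channels * in_channels * kernel_size * kernel_size  # 1728

# One bias value per output channel (since bias=True by default).
bias_params = out_channels  # 64

total_params = weight_params + bias_params
print(total_params)  # 1792
```

Note that this count is independent of the input image size: convolutional layers share the same small kernels across all spatial positions, which is what keeps them so much cheaper than fully connected layers on image data.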