Understanding Convolution & Cross-Correlation
This interactive visualization helps you understand how convolution and cross-correlation algorithms work, both in their naive implementations and with tiled optimizations for better cache performance.
Select a dimension from the menu above to start exploring!
1D Visualization
Naive Implementation
Tiled Implementation
Kernel
Naive Output
Tiled Output
How It Works
In 1D convolution, the kernel is flipped and then used to compute a weighted sum at each position of the input array.
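A minimal sketch of that naive pass in C; the function and variable names are illustrative and not taken from the TileAlgorithms repository. It computes a "valid" convolution (output length n - k + 1), and the kernel flip shows up as the reversed index k - 1 - j.

```c
#include <stdio.h>

/* Naive "valid" 1D convolution: out has length n - k + 1.
 * The kernel flip is the reversed index (k - 1 - j). */
void conv1d_naive(const float *in, int n,
                  const float *kernel, int k,
                  float *out)
{
    for (int i = 0; i <= n - k; i++) {            /* each output position */
        float sum = 0.0f;
        for (int j = 0; j < k; j++)               /* weighted sum over the window */
            sum += in[i + j] * kernel[k - 1 - j]; /* flipped kernel */
        out[i] = sum;
    }
}

int main(void)
{
    float in[]     = {1, 2, 3, 4, 5, 6};
    float kernel[] = {1, 0, -1};
    float out[4];

    conv1d_naive(in, 6, kernel, 3, out);
    for (int i = 0; i < 4; i++)
        printf("%g ", out[i]);                    /* prints: 2 2 2 2 */
    printf("\n");
    return 0;
}
```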
The tiled approach divides the input into smaller chunks to improve cache utilization.
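One possible reading of that chunking, again as a sketch rather than the repository's actual code: the output positions are processed tile by tile, so the slice of the input a tile needs is reused while it is still in cache. In 1D the gain is small because the naive loop is already sequential, but the same structure generalizes to the 2D and 3D cases below, where it matters.

```c
/* Tiled variant of the same "valid" 1D convolution.
 * TILE is an illustrative chunk size; the results are identical to the
 * naive version, only the traversal is grouped into chunks. */
#define TILE 4

void conv1d_tiled(const float *in, int n,
                  const float *kernel, int k,
                  float *out)
{
    int out_len = n - k + 1;
    for (int t = 0; t < out_len; t += TILE) {         /* one tile of outputs */
        int end = (t + TILE < out_len) ? t + TILE : out_len;
        for (int i = t; i < end; i++) {               /* outputs inside the tile */
            float sum = 0.0f;
            for (int j = 0; j < k; j++)
                sum += in[i + j] * kernel[k - 1 - j]; /* flipped kernel */
            out[i] = sum;
        }
    }
}
```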
2D Visualization
Naive Implementation
Tiled Implementation
Kernel
Naive Output
Tiled Output
How It Works
In 2D convolution, the kernel is flipped both horizontally and vertically, then used to compute a weighted sum at each position of the input matrix.
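A hedged C sketch of the naive 2D pass, using the same flat, row-major conventions as the 1D example (names are ours, not the repository's). The double flip is the pair of reversed indices (KH - 1 - ki) and (KW - 1 - kj).

```c
/* Naive "valid" 2D convolution over an H x W input with a KH x KW kernel.
 * The kernel is flipped in both dimensions via (KH-1-ki) and (KW-1-kj). */
void conv2d_naive(const float *in, int H, int W,
                  const float *kernel, int KH, int KW,
                  float *out)                 /* (H-KH+1) x (W-KW+1) */
{
    int OH = H - KH + 1, OW = W - KW + 1;
    for (int i = 0; i < OH; i++) {
        for (int j = 0; j < OW; j++) {
            float sum = 0.0f;
            for (int ki = 0; ki < KH; ki++)
                for (int kj = 0; kj < KW; kj++)
                    sum += in[(i + ki) * W + (j + kj)]
                         * kernel[(KH - 1 - ki) * KW + (KW - 1 - kj)];
            out[i * OW + j] = sum;
        }
    }
}
```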
The tiled approach processes the image in small rectangular regions to maximize cache efficiency.
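One way those rectangular regions could be expressed, as an illustrative sketch: the output is walked in TILE_I x TILE_J blocks, so the (TILE_I + KH - 1) x (TILE_J + KW - 1) input patch a block touches can stay resident in cache while it is reused. The tile sizes are assumptions, not values from the repository.

```c
/* Tiled 2D convolution: the output is processed in TILE_I x TILE_J blocks.
 * Results match conv2d_naive; only the traversal order changes. */
#define TILE_I 8
#define TILE_J 8

void conv2d_tiled(const float *in, int H, int W,
                  const float *kernel, int KH, int KW,
                  float *out)
{
    int OH = H - KH + 1, OW = W - KW + 1;
    for (int ti = 0; ti < OH; ti += TILE_I) {
        for (int tj = 0; tj < OW; tj += TILE_J) {
            int iend = (ti + TILE_I < OH) ? ti + TILE_I : OH;
            int jend = (tj + TILE_J < OW) ? tj + TILE_J : OW;
            for (int i = ti; i < iend; i++)          /* outputs in the tile */
                for (int j = tj; j < jend; j++) {
                    float sum = 0.0f;
                    for (int ki = 0; ki < KH; ki++)
                        for (int kj = 0; kj < KW; kj++)
                            sum += in[(i + ki) * W + (j + kj)]
                                 * kernel[(KH - 1 - ki) * KW + (KW - 1 - kj)];
                    out[i * OW + j] = sum;
                }
        }
    }
}
```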
3D Visualization
Naive Implementation
Tiled Implementation
Kernel
Naive Output
Tiled Output
How It Works
In 3D convolution, the kernel is flipped in all three dimensions before being applied to the input volume.
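For 3D, the triple flip is easiest to see in a helper that computes a single output voxel. This is a sketch following the same flat, row-major conventions as the earlier examples, with names of our own choosing.

```c
/* One output voxel of a "valid" 3D convolution over a D x H x W volume
 * (flat row-major layout; element (d,h,w) sits at (d*H + h)*W + w).
 * The kernel is flipped in all three dimensions via (KD-1-kd),
 * (KH-1-kh) and (KW-1-kw). */
static float conv3d_at(const float *in, int H, int W,
                       const float *kernel, int KD, int KH, int KW,
                       int z, int y, int x)
{
    float sum = 0.0f;
    for (int kd = 0; kd < KD; kd++)
        for (int kh = 0; kh < KH; kh++)
            for (int kw = 0; kw < KW; kw++)
                sum += in[((z + kd) * H + (y + kh)) * W + (x + kw)]
                     * kernel[((KD - 1 - kd) * KH + (KH - 1 - kh)) * KW
                              + (KW - 1 - kw)];
    return sum;
}
```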
The tiled implementation processes the data in 3D blocks to improve cache locality and performance.
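And one way such 3D blocking could look, reusing the conv3d_at helper above. The block sizes BZ, BY, BX are illustrative, not values from the repository; the output volume is simply traversed block by block so the sub-volume of the input needed by one block is reused while it is still in cache.

```c
/* Tiled traversal of the 3D output in BZ x BY x BX blocks.
 * conv3d_at is the per-voxel helper from the previous sketch. */
#define BZ 4
#define BY 8
#define BX 8

void conv3d_tiled(const float *in, int D, int H, int W,
                  const float *kernel, int KD, int KH, int KW,
                  float *out)          /* (D-KD+1) x (H-KH+1) x (W-KW+1) */
{
    int OD = D - KD + 1, OH = H - KH + 1, OW = W - KW + 1;
    for (int tz = 0; tz < OD; tz += BZ)
        for (int ty = 0; ty < OH; ty += BY)
            for (int tx = 0; tx < OW; tx += BX) {
                int zend = (tz + BZ < OD) ? tz + BZ : OD;
                int yend = (ty + BY < OH) ? ty + BY : OH;
                int xend = (tx + BX < OW) ? tx + BX : OW;
                for (int z = tz; z < zend; z++)       /* voxels in the block */
                    for (int y = ty; y < yend; y++)
                        for (int x = tx; x < xend; x++)
                            out[(z * OH + y) * OW + x] =
                                conv3d_at(in, H, W, kernel,
                                          KD, KH, KW, z, y, x);
            }
}
```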
About This Visualizer
This visualization tool was created to help explain the implementation details of convolution and cross-correlation algorithms, along with their optimized, tiled versions.
The visualizations show:
- How the naive implementations work by directly computing each output element
- How tiled implementations improve cache performance by processing data in chunks
- The differences between convolution (where the kernel is flipped) and cross-correlation (where it is not), as sketched just below this list
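As a hedged illustration of that last point (the helper name is ours, not the repository's), the flip is the only thing that separates the two operations:

```c
/* 1D "valid" window sum at output position i: with flip = 1 this is
 * convolution, with flip = 0 it is cross-correlation. */
float window_sum(const float *in, const float *kernel, int k,
                 int i, int flip)
{
    float sum = 0.0f;
    for (int j = 0; j < k; j++)
        sum += in[i + j] * kernel[flip ? (k - 1 - j) : j];
    return sum;
}
```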
Source code and implementation details for these algorithms can be found in the TileAlgorithms GitHub repository.