Convolution performance

MathWorks’ latest MATLAB Digest (January 2016) featured my book “Accelerating MATLAB Performance“. I am deeply honored and appreciative of MathWorks for this.

I would like to dedicate today’s post to a not-so-well-known performance trick from my book that can significantly speed up the computation of the convolution of two data arrays. Matlab’s internal implementation of convolution (conv, conv2 and convn) appears to rely on a sliding-window approach, using implicit (internal) multithreading for speed.

However, this can often be sped up significantly by using the Convolution Theorem, which states in essence that conv(a,b) = ifft(fft(a,N) .* fft(b,N)), an idea proposed by Bruno Luong. In the following usage example, note that we need to zero-pad the data to get comparable results:

% Prepare the input vectors (1M elements each)
x = rand(1e6,1);
y = rand(1e6,1);
 
% Compute the convolution using the builtin conv()
tic, z1 = conv(x,y); toc
 => Elapsed time is 360.521187 seconds.
 
% Now compute the convolution using fft/ifft: 780x faster!
n = length(x) + length(y) - 1;  % we need to zero-pad
tic, z2 = ifft(fft(x,n) .* fft(y,n)); toc
 => Elapsed time is 0.463169 seconds.
 
% Compare the relative accuracy (the results are nearly identical)
disp(max(abs(z1-z2)./abs(z1)))
 => 2.75200348450538e-10

This last comparison shows that the two results are nearly identical, up to a very minute difference, which is certainly acceptable in most cases considering the enormous performance speedup (780x in this specific case). Bruno’s implementation (convnfft) is made even more efficient by using MEX in-place data multiplications, power-of-2 FFTs, and use of GPU/Jacket where available.
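
For illustration, here is a minimal sketch of such an FFT-based convolution helper with power-of-2 padding (the function name fftconv is my own, not a builtin, and it does not reproduce convnfft’s MEX or GPU optimizations; save it as fftconv.m):

function z = fftconv(x, y)
    % Convolve two vectors via fft/ifft, padding to a power-of-2 length
    n = numel(x) + numel(y) - 1;      % length of the full (linear) convolution
    nfft = 2^nextpow2(n);             % power-of-2 FFT lengths are typically faster
    z = ifft(fft(x(:),nfft) .* fft(y(:),nfft));  % x(:),y(:) ensure column vectors
    z = z(1:n);                       % discard the extra zero-padded tail
    if isreal(x) && isreal(y)
        z = real(z);                  % remove negligible imaginary round-off
    end
end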

It should be noted that the builtin conv* functions can still be faster for relatively small data arrays, or if your machine has a large number of CPU cores and enough free memory for Matlab’s implicit multithreading to utilize; results also depend on the Matlab release. So your mileage may well vary. But given the significant speedup potential, I contend that you should give the FFT approach a try and see how well it performs on your specific system and data, for example using a simple timing loop such as the one below.
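
This is just a rough self-benchmarking sketch (the array sizes are arbitrary examples; be careful about adding much larger sizes, since the builtin conv can take minutes, as shown above):

% Compare the builtin conv() against the fft/ifft approach for several sizes
for n = [1e3, 1e4, 1e5]
    x = rand(n,1);
    y = rand(n,1);
 
    tic;  z1 = conv(x,y);                    t1 = toc;  % builtin sliding-window conv
 
    m = 2*n - 1;                             % zero-padded length for linear convolution
    tic;  z2 = ifft(fft(x,m) .* fft(y,m));   t2 = toc;  % FFT-based convolution
 
    fprintf('n = %7d:  conv = %.4f secs,  fft = %.4f secs\n', n, t1, t2);
end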

If you have read my book, please be kind enough to post your feedback about it on Amazon (link), for the benefit of others. Thanks in advance!

