Discussion:
[SciPy-dev] scipy.signal.convolve2d significantly slower than matlab
Joel Schaerer
2008-06-16 14:47:27 UTC
Hi all,

I've found that the 2D convolution code in scipy is significantly slower than
matlab's. The following code:

--------------------------------------------------------------------
#!/usr/bin/python

import scipy.io as io
import scipy.signal

aa=io.loadmat('input.mat')
seq=aa["seq"]

seq_f=[]

kernel=scipy.randn(21,21)
for k in xrange(seq.shape[2]):
    seq_f.append(scipy.signal.convolve2d(seq[:,:,k],kernel,'same'))
--------------------------------------------------------------------

Executes about 8 times slower than the following matlab code (executed with
matlab7 on 32bit linux):

--------------------------------------------------------------------
clear all
close all
clc

load input.mat

g=randn(21,21);

seq_f=zeros(size(seq));
for p=1:size(seq,3),
    seq_f(:,:,p)=conv2(seq(:,:,p),g,'same');
end
--------------------------------------------------------------------

I've traced the convolution code in scipy to the pylab_convolve2D function in
firfilter.c. It contains this telling comment:

/* This could definitely be more optimized... */

...

Anyway, convolution is crucial for some applications,
so I wanted to report this. I also tried to look for
some good open source convolution code so that it
might be incorporated into scipy, but couldn't find any.
Would anyone know of one?

Totally unrelated question to the scipy devs: do you plan
to add a hierarchical data structure such as kd-trees to
scipy? With a nice API, it would be a great feature.
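
For reference, SciPy did later gain this kind of structure in scipy.spatial. A
minimal sketch of nearest-neighbour queries with the cKDTree class (the data
here is made up purely for illustration):

--------------------------------------------------------------------
import numpy as np
from scipy.spatial import cKDTree  # available in later SciPy releases

points = np.random.rand(1000, 3)   # 1000 random points in 3-D
tree = cKDTree(points)             # build the kd-tree once

queries = np.random.rand(5, 3)
dist, idx = tree.query(queries, k=3)   # 3 nearest neighbours per query point
print(dist.shape, idx.shape)           # (5, 3) (5, 3)
--------------------------------------------------------------------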
Tom Waite
2008-06-16 22:40:43 UTC
Do you require the filter kernel to be non-separable for your application?
Post by Joel Schaerer
I've found that the 2D convolution code in scipy is significantly slower than
matlab's. [...]
Stéfan van der Walt
2008-06-17 09:27:39 UTC
Hi Joel
Post by Joel Schaerer
I've found that the 2D convolution code in scipy is significantly slower than
matlab's. [...]
Also take a look at fftconvolve.

Regards
Stéfan
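
For illustration, a sketch of what the original loop might look like using
fftconvolve instead of convolve2d (assuming the same input.mat layout as in
Joel's script; the kernel here is generated with numpy directly):

--------------------------------------------------------------------
import numpy as np
import scipy.io as io
import scipy.signal

aa = io.loadmat('input.mat')
seq = aa["seq"]

kernel = np.random.randn(21, 21)
seq_f = []
for k in range(seq.shape[2]):
    # FFT-based convolution is typically much faster for a 21x21 kernel
    seq_f.append(scipy.signal.fftconvolve(seq[:, :, k], kernel, mode='same'))
--------------------------------------------------------------------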
Tom Waite
2008-06-17 14:53:42 UTC
In addition to fftconvolve, if the filter kernel is separable (Gaussian,
sinc, difference-of-sinc), you can use scipy.ndimage.correlate1d.
correlate1d is very fast and checks for kernel symmetry, so it can reduce
the number of multiplications. In your application script you are building
a stack of filtered images. I wrote a registration package in ndimage that
has a 3D separable filter for volume filtering prior to registration. I am
about to check in changes to the code that will include a test showing how
to get both the 3D result and a stack of 2D filtered slices (which is an
intermediate result of the 3D filtering). I typically do 3D filtering of
256x256x90 volumes in under a second. I agree that convolve2d needs to be
improved for speed.
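
A rough sketch of the separable approach described above, assuming a Gaussian
kernel so that the 2-D filter factors into two 1-D passes (the kernel-building
helper and the sigma value are illustrative, not part of ndimage):

--------------------------------------------------------------------
import numpy as np
from scipy import ndimage

def gaussian_weights(sigma, radius):
    # illustrative 1-D Gaussian weights, normalised to sum to 1
    x = np.arange(-radius, radius + 1)
    w = np.exp(-0.5 * (x / sigma) ** 2)
    return w / w.sum()

img = np.random.rand(256, 256)
w = gaussian_weights(sigma=2.0, radius=10)   # 21-tap kernel, as in the thread

# Separable filtering: one 1-D correlation along each image axis.
tmp = ndimage.correlate1d(img, w, axis=0, mode='reflect')
out = ndimage.correlate1d(tmp, w, axis=1, mode='reflect')
--------------------------------------------------------------------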
Post by Stéfan van der Walt
Also take a look at fftconvolve.