Wednesday, February 18, 2015

Fast Biased Convolution Algorithm

Ever try to speed up a convolution, only to realize it either needs to be handed off to a GPU, or that most of your time is lost allocating, copying, and freeing that giant second buffer? Don't mind if your values shift up and to the left by half the matrix width and height? Then I have an algorithm for you!

http://pastebin.com/1i0vwJgv

I'm hoping to tweak it to add a field called bias, which will let you choose the direction of the bias. So long as the algorithm iterates the field diagonally, it can always safely perform the convolution for a window and stick the answer in the corner cell that is never going to be read again.
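
To pin down what the bias means here, a minimal out-of-place reference (my own sketch in Python, not the pastebin code; the name biased_convolve is made up): the output at (i, j) is the kernel applied to the window whose top-left corner is (i, j), which is the usual centered result shifted up and to the left by half the kernel size.

    import numpy as np

    def biased_convolve(image, kernel):
        # Reference version with a separate output buffer, just to
        # define the biased indexing. Output at (i, j) = kernel applied
        # to the window whose TOP-LEFT corner sits at (i, j).
        h, w = image.shape
        kh, kw = kernel.shape
        out = np.zeros_like(image, dtype=float)
        for i in range(h - kh + 1):
            for j in range(w - kw + 1):
                out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
        # The right/bottom strip of width kw-1 / height kh-1 is left at
        # zero; that is where the bias pushes the unused border.
        return out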

Update:
So the entire thing seems to be pointless. If you put the result point at the corner of the kernel, you can do the convolution with just a scanline: literally a j/k loop, and you never allocate another big block of memory.

http://pastebin.com/bk0A2Z5D

Why didn't anybody point this out before? All the convolution implementations kicking around are mostly pointless. They are so obsessed with keeping the pixel location consistent that they insist on odd-sized kernels and leave garbage at the edges (or, more typically, just don't apply the kernel there), when really you could get the result in the same memory.
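
Here is a minimal sketch of that in-place scanline version (my own Python, not the pastebin code): each window's result is written into the window's own top-left cell. That cell is read only by the current window, and every later window starts at the same row and column or past it, so nothing that still needs to be read is ever clobbered, and no second buffer is allocated.

    import numpy as np

    def biased_convolve_in_place(image, kernel):
        # Assumes a float image buffer so the accumulated sums fit.
        h, w = image.shape
        kh, kw = kernel.shape
        for i in range(h - kh + 1):          # plain top-to-bottom scanline...
            for j in range(w - kw + 1):      # ...literally a j/k loop
                acc = 0.0
                for ki in range(kh):
                    for kj in range(kw):
                        acc += image[i + ki, j + kj] * kernel[ki, kj]
                # Overwrite the corner cell that was just consumed; no
                # later window reaches back to row < i+1 or, on this row,
                # to column < j+1, so the overwrite is safe.
                image[i, j] = acc
        return image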
