Thursday, February 19, 2015

Combining Convolution Kernels

So if you get rid of the idea of the center of a convolution kernel and always write the results into the corner, not only can you perform the operation in the same memory space the image currently resides in, but you can also COMBINE the convolution kernels beforehand.

For example:

private static final int[][] twoboxblurs = new int[][]{
        { 1,2,3,2,1 },
        { 2,4,6,4,2 },
        { 3,6,9,6,3 },
        { 2,4,6,4,2 },
        { 1,2,3,2,1 }
};
 
Applied once, this does the same thing as two box blurs in a row (assuming no integer-division error in the first pass). Each entry is the number of times the pixel that far down and to the right plays a role in the result. So it's the combination of:

1,1,1
1,1,1
1,1,1

and

1,1,1
1,1,1
1,1,1

Since each result pixel gathers a contribution from every pixel down and to the right of it, you can sum the contributions of the two passes together and properly expand out the kernel. And since you're never looking for data you won't already have, you can do such a thing.
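To make that concrete, here's a minimal sketch of the combining step; this is my own illustration rather than the pastebin code, and the name combineKernels is made up. It stamps a copy of the second kernel into each space of the first, scaled by the weight in that space:

    // Combine two convolution kernels into one by convolving them together.
    // Points outside a kernel are treated as zero, so the result grows to
    // (ah + bh - 1) x (aw + bw - 1).
    static int[][] combineKernels(int[][] a, int[][] b) {
        int ah = a.length, aw = a[0].length;
        int bh = b.length, bw = b[0].length;
        int[][] c = new int[ah + bh - 1][aw + bw - 1];
        for (int ay = 0; ay < ah; ay++) {
            for (int ax = 0; ax < aw; ax++) {
                // Stamp a copy of b here, multiplied by a's weight in this space.
                for (int by = 0; by < bh; by++) {
                    for (int bx = 0; bx < bw; bx++) {
                        c[ay + by][ax + bx] += a[ay][ax] * b[by][bx];
                    }
                }
            }
        }
        return c;
    }

Feeding it two 3x3 boxes of ones produces exactly the twoboxblurs table above.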

You can use the source code here:
http://pastebin.com/bk0A2Z5D

That gives you a convolution with no extra memory footprint. It also means you can just chain the results together: the result pixel 2 over and 1 down needs 1 copy of the pixel 2 over and 1 down from it. Follow all those pixels recursively and you're good.

This is why doing a blur and then an emboss is different from doing an emboss and then a blur. The problem here is that the way convolution is usually done is wrong. I was busy trying to figure out clever ways to do a convolution of an image in the same space as the image memory and came across a solution: you can always do a convolution in the same memory footprint if, and only if, the "center pixel" is located in the upper left corner. Then you iterate left to right, top to bottom. So long as you never require data from the area above or to your left, which has already been overwritten, this is fine.
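As a sketch of what that scan looks like (my own illustration of the idea, assuming a single-channel int[] image rather than the packed ARGB pixels the pastebin handles):

    // In-place corner-anchored convolution. The result for (x, y) reads only
    // pixels at (x .. x+kw-1, y .. y+kh-1), none of which a left-to-right,
    // top-to-bottom scan has overwritten yet. The last kw-1 columns and
    // kh-1 rows are left untouched, since the kernel would run off the edge.
    static void convolveInPlace(int[] pixels, int stride, int width, int height,
                                int[][] kernel, int divisor) {
        int kh = kernel.length;
        int kw = kernel[0].length;
        for (int y = 0; y <= height - kh; y++) {
            for (int x = 0; x <= width - kw; x++) {
                int sum = 0;
                for (int ky = 0; ky < kh; ky++) {
                    for (int kx = 0; kx < kw; kx++) {
                        sum += kernel[ky][kx] * pixels[(y + ky) * stride + x + kx];
                    }
                }
                pixels[y * stride + x] = sum / divisor;
            }
        }
    }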

Now, here's how this gets to the problem above. Removing the dependency on the previous pixels means you can not only do it in the same memory but also merge kernels. If you use convolution with the result pixel located in the upper left, then pixel X doesn't need its own old value, and every pixel that is going to be used for the convolution is located down and to the right. So you can create the correct answer by performing both kernels at the same time and putting the result into that pixel. Which can be done by way of a single kernel.

The kernel you would use is simply these operations combined. So blur:

    1,1,1
    1,1,1
    1,1,1

combined with another blur gives:

    1,2,3,2,1
    2,4,6,4,2
    3,6,9,6,3
    2,4,6,4,2
    1,2,3,2,1

It is assumed that all points not used are zero. In each of those 9 spaces you place another copy of the kernel, multiplied by the weight in that space, and add up the overlapping parts. This works because there is no longer any dependency on previously written pixels.

The problem isn't with the kernels but with how convolution is always implemented, placing the result in the middle of the field. I'm not sure why this became the convention historically, but if that alignment were really required, it would be easier to just shift the pixels over and down by one after the fact.

You will, however, lose the compounded rounding bias: computing value / 81 once rather than (value / 9) / 9, with every intermediate pixel rounded in between, can make your resulting matrix slightly more correct than it would otherwise have been. For example, box-blurring the 1D row 0,5,0,5,0 twice with integer division gives 1 at the center, while the combined 1,2,3,2,1 kernel gives 20/9 = 2, closer to the exact 20/9 ≈ 2.2.

Update: Eh, not really that cool. You can totally get the same answer faster by running the smallest kernels you have in sequence: two 3x3 passes are 9 + 9 = 18 multiplies per pixel, versus 25 for the combined 5x5. So it's still faster to blur twice than to apply the merged kernel.

Olsen Noise 2D Java Code Updated.


http://pastebin.com/AFSnaJnk //commented to hell and back.

I went about rewriting the code. I was always annoyed by how much memory the sucker would grab when it didn't really need to, and I fixed all those issues. I also figured out a better way to do matrix convolutions (for basically all images, all convolutions), which should speed things up and let it work without tossing around giant memory blocks. It sits in its own footprint, even when it does operations like blur with a convolution kernel.

To use the class, I did:

        OlsenNoise2D on = new OlsenNoise2D(); // really all the functions can be static.
        int rh = on.getRequiredDim(height);
        int stride = on.getRequiredDim(width);
        int[] pixels = new int[stride * rh];
        on.olsennoise(pixels, stride, x, y, width, height);
        BufferedImage bi = new BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB);
        bi.setRGB(0, 0, width, height, pixels, 0, stride);


Update: I used to have the source here; just use the pastebin.

Wednesday, February 18, 2015

Fast Biased Convolution Algorithm

Ever try to speed up convolution algorithms, only to realize they either need to be handed off to a GPU, or that most of your lost time is the allocation and deallocation of that giant second copy of the image memory? Do you not mind if your values shift up and to the left by half the matrix width and height? Then I have an algorithm for you!

http://pastebin.com/1i0vwJgv

I'm hoping to tweak it to add a field called bias, which will let you choose the direction of the bias. So long as the algorithm iterates the field diagonally, it can always safely perform the convolution of the data and stick the answer into the corner that is never going to be used again.

Update:
So the entire thing seems to be pointless. If you put your matrix result point at the corner of the kernel, you can do the convolution with just a scanline pass: literally a J/K loop, never allocating another big block of memory.

http://pastebin.com/bk0A2Z5D

Why didn't anybody point this out before? All the convolution implementations kicking around are mostly pointless. They are so obsessed with keeping the pixel location consistent that they insist on odd-sized kernels and leave garbage at the edges (or, more typically, just don't apply the kernel there), when really you could get the result in the same memory.

Sunday, February 1, 2015

On consumers creating jobs

It doesn't matter how risky a bet the guy who raises the capital makes; the risk is whether or not the consumer will buy what he's selling.

This is like insisting that the captain is the single most important person because he's steering the ship, because after all, he's the one who would go down with it if it sinks. But the consumers here aren't other members of the crew. They are the motherfucking ocean.