Monday, August 4, 2014

3D Olsen Noise

Java Source Code: http://godsnotwheregodsnot.blogspot.com/2015/02/olsen-noise-2d-java-code-updated.html


So I made a newer noise algorithm that goes beyond fractal diamond squared noise. I had previously removed the limitations on size and the memory issues, allowing proper paging.

Now I've gotten rid of the diamond squared step and the artifacts it produces, and I've made it so the algorithm can be quickly expanded into multiple dimensions.

http://www.gfycat.com/SecondhandDimpledGull

Basically, rather than doubling the size, applying the diamond elements, and then applying the square elements:

You upsample, increasing the size of each pixel to 2x in each dimension. You add noise to every pixel, reducing it depending on the iteration (my current reduction is Noise / (iteration + 1)). You apply a box blur (though any blur would work). And it's all done in my infinite field scoping scheme, wherein the base case is pure randomness and each later case is recursively scoped.
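For illustration only, one iteration step might look something like the sketch below. This is not the linked source: the names are mine, and a java.util.Random stands in for the deterministic coordinate hash the real code uses.

import java.util.Random;

class OlsenStepSketch {
    // One step of the scheme above: upsample 2x, add noise reduced by the iteration,
    // then box blur. Any blur would work; a 3x3 box is the cheapest.
    static int[][] step(int[][] map, int iteration, Random rng) {
        int h = map.length, w = map[0].length;
        int[][] big = new int[h * 2][w * 2];
        // Upsample: each source pixel becomes a 2x2 block, with noise added as
        // Noise / (iteration + 1).
        for (int y = 0; y < h * 2; y++)
            for (int x = 0; x < w * 2; x++)
                big[y][x] = map[y / 2][x / 2] + rng.nextInt(256) / (iteration + 1);
        // 3x3 box blur; edge pixels simply average fewer neighbors.
        int[][] out = new int[h * 2][w * 2];
        for (int y = 0; y < h * 2; y++) {
            for (int x = 0; x < w * 2; x++) {
                int sum = 0, count = 0;
                for (int dy = -1; dy <= 1; dy++) {
                    for (int dx = -1; dx <= 1; dx++) {
                        int yy = y + dy, xx = x + dx;
                        if (yy < 0 || xx < 0 || yy >= h * 2 || xx >= w * 2) continue;
                        sum += big[yy][xx];
                        count++;
                    }
                }
                out[y][x] = sum / count;
            }
        }
        return out;
    }
}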

Update 9/22: The Java source code and demo reduce the noise as +bit@iteration. So iteration 7 flags the 7th bit, adding +128 or +0; the 6th bit adds +64 or +0; and so on. Doing this allows it to skip normalization, as the end result will *always* fall between 0 and 255.
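As a hedged illustration of that trick, the per-level noise can be computed like this; the hash below is a throwaway stand-in, not the hash from the linked source.

class BitNoiseSketch {
    // Either 0 or 2^iteration: iteration 7 contributes +128 or +0, iteration 6
    // contributes +64 or +0, and so on. Summed over iterations 7..0, the maximum
    // is 128 + 64 + ... + 1 = 255, so the total always lands in [0, 255] and no
    // normalization pass is needed.
    static int bitNoise(int x, int y, int iteration) {
        long h = x * 0x9E3779B97F4A7C15L ^ y * 0xC2B2AE3D27D4EB4FL ^ iteration;
        h ^= h >>> 31;
        h *= 0xBF58476D1CE4E5B9L;
        h ^= h >>> 29;
        return (int) h & (1 << iteration);
    }
}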

No more artifacts, the algorithm would be quick to implement on GPU hardware, and it doesn't change at N dimensions.

Update: While the algorithm was actually made with GPU hardware in mind, and would implement there very quickly in a way diamond squared would not, it does change at N dimensions, in that more of the roughness flows into the additional dimensions. Rather than averaging over 9 somewhat-random pixels at a given level, it averages over 27, so each level ends up much closer to the mean. You might still get the desired effects by reducing the number of iterations.

I've also confirmed that a 2d slice of 3d noise is the same as a 2d patch of noise. Since it's fractal, this should be expected. I don't think you can do things like turbulence bump-mapping as with simplex noise, because the absolute value of Olsen Noise is pretty much, again, just fractal noise. Fractals are fun like that.

Update: It's this fact about Olsen Noise that initially led to my false confirmation of the noise. If you normalize it, regardless of whether it's excessively smooth or not, it will look like identical noise. If you want to go that route, then the noise won't change from 2d to 3d, because the narrower-ranged 3d noise will be zoomed in on and give the same appearance of roughness.

And since the noise is scoped, you can map it out in N dimensions. So not only could you make it go around corners without hard edges, like this paper is so happy with itself for doing, you could simply go from wanting a 1x500x500 slice at 0,0 to wanting a 500x1x500 slice at 0,500. It would by definition be seamless.
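For example (purely a usage sketch; getNoise3D is a hypothetical entry point, not the signature of the 3D source linked below):

// Two perpendicular slices of the same infinite 3D field. Because every value is a pure
// function of its coordinates and the iteration count, adjacent slices continue the same
// field where they meet, with no stitching step.
int iterations = 7;
int[][][] wall  = getNoise3D(0, 0, 0, 1, 500, 500, iterations);    // a 1x500x500 slice at 0,0,0
int[][][] floor = getNoise3D(0, 500, 0, 500, 1, 500, iterations);  // a 500x1x500 slice at 0,500,0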



And unlike other noise algorithms, it's fast and *simple*. In fact, it's a number of simplifications of diamond squared noise all rolled up in an infinite package (which is itself a simplification).


One can reduce the iterations with distance: far enough away from you, you have 4-block sections, which are the same as the close bits but with one iteration dropped.

Update: Reducing the iterations in the demo can be seen as sampling at 2x2 the value. It's basically the same cost: you don't need to generate the full size and reduce it; you can just request the area scaled down by 2x2 at one fewer iteration.
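In terms of the getTerrain call outlined in the pseudocode further down, the idea is roughly this (a sketch that ignores rounding at odd coordinates):

// Near terrain: full detail.
int[][] near = getTerrain(x0, y0, x1, y1, iterations);
// Far terrain: the same region at 1:2 per axis for roughly the same cost, by halving
// the coordinates and dropping one iteration instead of generating full size and shrinking.
int[][] far = getTerrain(x0 / 2, y0 / 2, x1 / 2, y1 / 2, iterations - 1);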

Sampled at 1:5
http://www.gfycat.com/ImpressionableShinyFlounder

Sampled at 1:20
http://www.gfycat.com/KnobbyBlissfulFairyfly


Wrapping:

If it were mission critical to have the noise wrap like old diamond squared noise, this could be done by feeding the deterministic hash x MOD wrap, y MOD wrap, z MOD wrap along with the iteration, with the wrap itself scoped by iteration. So if you wanted it to wrap at 100 with 7 iterations (all my examples here use 7 iterations), your deterministic random hash function would give you random offsets modded at 100; then, at the call for iteration 6, it would give you random offsets looped at 51; and so on. This would be independent of your requested set of pixels: the recursive scoping would make sure the random variables sync up. You could then do awesome things like wrap at a specific (and different) x, y, and z, so you could make noise that wraps horizontally at 1000 but vertically at 50. In theory. I haven't managed to get it to work, and there could be some kind of desyncing that happens when one iteration is looping at 16 and the next at 31. It might require a multiple of 2 for the wrapping, or even a 2^(max iterations) wrap, or nothing at all.

Wrapping is left to later. I'll settle for better than everything else.

Smoothness:
Smoothness is mostly a product of the number of iterations along with the drop-off rate of the randomness.

http://www.gfycat.com/LateRevolvingBlackmamba

Update: Algorithm Outline.
It occurs to me that I should have provided some pseudocode.


getTerrain(x0, y0, x1, y1, iterations) {
    if (iterations == 0) return matrixOf(random numbers);
    map = getTerrain(floor(x0/2) - 1, floor(y0/2) - 1, ceiling(x1/2), ceiling(y1/2), iterations - 1);
    make a newmap twice as large.
    upsample map into newmap.
    apply blur to newmap.
    add deterministic random offsets to all values in newmap (decreasing with each iteration).
    return the requested area from within newmap. (This is typically newmap from [1, n-1] x [1, n-1].)
}
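For anyone who wants something closer to compiling, here is a hedged Java rendering of that outline. It is a sketch only, not the linked source: the hash is a throwaway stand-in, the scope grabs one extra pixel on the high side so the blur has margin, and the running values can exceed 255 (normalize for display, or use the bit trick from the 9/22 update above).

import java.util.Arrays;

class OlsenNoiseSketch {
    // Throwaway deterministic hash: a pseudo-random value in [0, 255] for a coordinate
    // and iteration. The linked source uses its own hash; any deterministic one works.
    static int hash(int x, int y, int iteration) {
        long h = x * 0x9E3779B97F4A7C15L ^ y * 0xC2B2AE3D27D4EB4FL ^ (long) iteration << 32;
        h ^= h >>> 31;
        h *= 0xBF58476D1CE4E5B9L;
        h ^= h >>> 29;
        return (int) (h & 0xFF);
    }

    // Returns the area [x0, x1) x [y0, y1) of the infinite field at the given iterations.
    static int[][] getTerrain(int x0, int y0, int x1, int y1, int iterations) {
        int w = x1 - x0, h = y1 - y0;
        if (iterations == 0) {
            // Base case: pure deterministic randomness.
            int[][] base = new int[h][w];
            for (int y = 0; y < h; y++)
                for (int x = 0; x < w; x++)
                    base[y][x] = hash(x0 + x, y0 + y, 0);
            return base;
        }
        // Recursively scope: ask the previous iteration for a slightly larger, half-scale area.
        int px0 = (int) Math.floor(x0 / 2.0) - 1, py0 = (int) Math.floor(y0 / 2.0) - 1;
        int px1 = (int) Math.ceil(x1 / 2.0) + 1, py1 = (int) Math.ceil(y1 / 2.0) + 1;
        int[][] parent = getTerrain(px0, py0, px1, py1, iterations - 1);
        int pw = (px1 - px0) * 2, ph = (py1 - py0) * 2;

        // Upsample 2x and add deterministic noise, reduced as Noise / (iteration + 1).
        int[][] big = new int[ph][pw];
        for (int y = 0; y < ph; y++)
            for (int x = 0; x < pw; x++)
                big[y][x] = parent[y / 2][x / 2]
                        + hash(px0 * 2 + x, py0 * 2 + y, iterations) / (iterations + 1);

        // 3x3 box blur into a second map (the linked source is smarter about memory).
        int[][] blurred = new int[ph][pw];
        for (int y = 0; y < ph; y++)
            for (int x = 0; x < pw; x++) {
                int sum = 0, count = 0;
                for (int dy = -1; dy <= 1; dy++)
                    for (int dx = -1; dx <= 1; dx++) {
                        int yy = y + dy, xx = x + dx;
                        if (yy < 0 || xx < 0 || yy >= ph || xx >= pw) continue;
                        sum += big[yy][xx];
                        count++;
                    }
                blurred[y][x] = sum / count;
            }

        // Trim down to exactly the requested area.
        int offX = x0 - px0 * 2, offY = y0 - py0 * 2;
        int[][] out = new int[h][w];
        for (int y = 0; y < h; y++)
            out[y] = Arrays.copyOfRange(blurred[offY + y], offX, offX + w);
        return out;
    }
}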

Update: Actual Java Source Code.
http://godsnotwheregodsnot.blogspot.com/2014/09/olsen-noise-source-code-in-java.html


Update: Demo.
http://tatarize.nfshost.com/OlsenNoise.htm

Update: 3D Noise Source Code, With Commenting.
http://pastebin.com/WJVyDxDR

37 comments:

Dampha said...

I took the image from the Demo and I got this image using Blender http://imgur.com/f2aanDf. I'm not sure what causes the edge problem.

Also, is http://pastebin.com/gh6P5zf3 the same version as in the Demo? I would love to utilize this in https://www.youtube.com/watch?v=m48OTA5qtRM&list=UUTObP1VzcIglm7uTgUBQjawv as a method to create infinite terrain along with STB_Perlin.

Dampha said...

I made this, but I think I'm misreading or misunderstanding your code that creates the first field hash.

http://pastebin.com/5P5fDZzu

Tatarize said...

I'm not sure what you mean by edge problem. The edges look rightly just as noisy as the rest of it.

Dampha said...

First I'll try to convert Olsen2D to C++.

Tatarize said...

I don't foresee any problems there. The Java is pretty close to C++ already. And I know it works fine.

If you're confused about what's going on, I posted the source for the 3d noise. It's basically the same algorithm but it's more properly divided up into what it's actually doing and gives explanations of what's going on.

It's using recursion to get the smallest needed slice of the infinite field, to solve for some requested slice at the requested number of iterations, and applying noise, upsampling, and blur to the results (where the amount of noise drops off at the higher levels, as in diamond squared).

Tatarize said...

Or not quite.

nxh = x + width;
nyh = y + width;

That second one should be +height
Else it fails save when it's a square.

Tatarize said...

Also, come to think of it, I didn't bother to break it down to the much faster way of doing things; instead it makes many, many new int[][] arrays.

I'm planning on optimizing it, I suppose, and writing an app for it for Android. And I kind of need to, to scroll in real time full screen. Though if you just need to generate landscape it'll do that in a snap. And at that rate it is set up to work with GPU math.

Dampha said...

Yeah, if that's possible, that would be a time saver. This is what I'm getting just up to the upsample part, taking away the 2-dimensional array usage, but in doing so I'm getting memory problems because there are so many variations of sizes going between upsizing and downsizing, and memory pointing.

I'm still going to try to do it, but it's going to take me a while.

Dampha said...

This is the code I've converted to C++ so far (Olsen 2D):
http://pastebin.com/qbgBL0Hq

This is the image it produces:
http://imgur.com/i6Lbr1P

It still has some way to go, but maybe you'll notice the problem.

Dampha said...

The picture from the current one is http://imgur.com/nb2pMjP

Tatarize said...

I can see some places where it bleeds and the numbers clearly are overflowing. Like the white goes from 0b11111111 to 0b100000000 which is basically 0b000000000 as far as the sample is concerned. And it really looks like half way down the damned thing loses a few bytes and suddenly it's filling in the scan lines with a bias. Like it's 1000x1000 for a while, then half way down it suddenly thinks it's only 993 stride and the pixel it takes to be 1 down from it is really 1 down and 7 to the left.

Tatarize said...

Also, and less obviously, the top half has a patterned sweep. Subtly, a few layers down, even in the top half there's clearly a down-right sweep. It's properly fractal noise, which means there are no patterns. Any pattern is a bug.

Try jiggering with the number of iterations. It's recursive, so if you request 0 iterations it should give you real, pure random noise. At 1 it should be a little less random but still very all over the place, and so on. Apparently some of it is working, which obscures any good understanding of the bugs.

One thing about fractal noise is that bugged fractal noise very often looks like fractal noise.

Dampha said...

I posted a new version so far. I think it's 80 out of 100 functions.

http://pastebin.com/qKJxwSL6

An image of the generated result is http://imgur.com/w8BDW1P

at iteration 7

Tatarize said...

That's so weird. Clearly something is adding something to the top half. At iteration 7 it shouldn't need normalization. But clearly many of those parts are so white they are black. Which is to say they try to shove nine bits of whiteness into an 8 bit number and get 0.

I'm torn between checking through your code and figuring out an easier way to transcode it. For the most part, getting the scope at one location to line up with the scope at the next is hard.

Though for some reason your code looks like iteration 6 + iteration 7, like the previous iteration with the previous, smaller width is being added onto the last iteration in that same matrix, giving the top part of the image numbers that are too high, with a scanline that is less than the actual width (causing the directional flow).

Dampha said...

I agree. It's hardly noticeable at 2 iterations, but the higher it goes, the more dominantly it shows.

I also notice that the closer it gets to the highest bit position, the more it reverts to black, like you mentioned. How I handled it in other code was not to use the bitwise operation but a floating-point or integer value that is scaled to a value of 0 to 1 (or whatever range) in the final result.

I'm going to take a break and re-read the code. I was thinking maybe it was a problem with a memory overwrite, but that shouldn't cause a problem.

Dampha said...

Here's the current code. I changed memcpy to memmove just in case. Additionally, I changed a few lines to better match the originals.

The iteration is set to 4, where the problem is noticeable. I'll take a look later. It does look like the iterations are getting mixed together.

Tatarize said...

The likely error is that when it rounds one way rather than the other, it takes an improper scope and grabs garbage rather than safe data. Each time it scopes through the iterations it properly has a set amount of data at the core which is guaranteed to be valid. If one iteration rounds the edge of the scope down rather than up, or some such thing, it'll eat in some garbage at the edge, which is what most of that overly complicated stuff is attempting to guard against. To that end, I rewrote the way convolutions work so that I can now convolve any square kernel (and all kernels can be made square) into the same memory the pixels are stored in, currently biasing the center pixel to the upper left.

http://pastebin.com/1i0vwJgv

It's basically a way to perform any convolution without allocating more memory, and the good data will always start at the given x,y location. It also doesn't use any multiplication in the core loop, but that's neither here nor there.
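The gist of that trick, as a rough sketch rather than the pastebin code (this version does spend a multiply per tap for clarity):

// Convolve a square kernel into the same array the pixels live in. Writing each result
// at the upper-left corner of its window means that, scanning in raster order, no cell
// is ever read after it has been overwritten, so no second buffer is needed. The good
// data ends up starting at 0,0 and shrinks by (k - 1) in each dimension.
static void convolveInPlace(int[] pixels, int stride, int width, int height,
                            int[][] kernel, int divisor) {
    int k = kernel.length; // square k x k kernel
    for (int y = 0; y <= height - k; y++) {
        for (int x = 0; x <= width - k; x++) {
            int sum = 0;
            for (int ky = 0; ky < k; ky++)
                for (int kx = 0; kx < k; kx++)
                    sum += kernel[ky][kx] * pixels[(y + ky) * stride + (x + kx)];
            pixels[y * stride + x] = sum / divisor;
        }
    }
}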

Tatarize said...

As for your particular bug, try moving the location: request the same number of pixels but move it over by a couple thousand or so, perhaps by a prime number. It's very likely the scoping causing your issue. And I don't want anybody else running into such problems when they implement it, so I'm hoping I can simplify it greatly (also, I need to not allocate more memory, because on Android that's death), and I want a demonstration app.

Dampha said...

I changed how the matrix blur, upsample, and trim were assigned to the field. It was definitely a memory issue.

The working code is http://pastebin.com/iZUPJThc

The image gallery of it is http://imgur.com/a/ei4dF

The only thing I noticed is that two of the edges change drastically, creating a weird distortion. So maybe modify the code to add an extra XY (image + 1).

Then clip the extra edges in the final copy.

Tatarize said...

The code should have a trim built in there already. The problem at the edges is that garbage is seeping in at a lower level, like it's rounding down when it shouldn't and failing to capture the full extent. I've been working on coding up a new version that requires no memory allocation beyond the initial one, and just that one matrix. It's not done yet, but it's getting there.

http://pastebin.com/vE079xQb

It'll give noise right off, though it'll be blue, as I didn't grayscale it, and it'll ask for way more data at every level than it needs. And without changing the position of the scope it won't move right.

Speaking of which, if your stuff is giving you some garbage at the edge, it's likely not going to move right. If you call two different locations that overlap, like 0,0,1000,1000 and 0,500,1000,1000, they won't match up right.

Most of the trouble in the damned thing is asking the next iteration for the correct data in the correct location (the only thing I didn't do in my latest version thus far). Typically, because of the reduction by half each time, you end up asking for a 4x4 grid at the lowest level each time.

Tatarize said...

A bit closer, but starting all the good pixels at 0,0 and having all the operations maintain that seems like it should work fine. Might have it done before you wake.

http://pastebin.com/8D5TQp2B

Dampha said...

I'll be on the lookout for it. Is the pastebin the final version?

Tatarize said...

I gotta figure out a proper way to track that stuff. Damned changes to the scope over time are so hard to track.

http://pastebin.com/BdVY9wXN

Weird way of being broken though.

Dampha said...

I agree.

I converted most of the code to the C++ equivalent from the previous post.

The result image is http://imgur.com/9eEdk6e

Converting array usage to memory pointers is harder. Anything referencing [x][y] I have to change to x+(y*width), so I have to find all the places where I need to change the y part to y*width.

Hmmm.



Dampha said...

I tested the original code that was converted to C++. Good news: http://imgur.com/a/p42gb

It looks like it works like a charm and would possibly work with Urho3D. I haven't merged it into the code yet. I want to see how STB Perlin produces its results, plus I'm going to dump the diamond method.

You can see some of what I'm doing from the Demo video.

Tatarize said...

Well, I made it actually run really fast and cut out basically everything unneeded. I had to change the entire idea of how convolutions work (people have been doing them basically wrong, or rather they have a lot of options they never considered).

http://pastebin.com/fy7fV16C

It doesn't allocate any memory to run beyond the initial pixels so it should in theory port over with way less of a headache. I've also commented it well enough to see what's going on.

Dampha said...

This is what I have so far.

The applyNoise worked after I set index+k to index+(k*width).

I'm not sure about ApplyBlur, ApplyScale, etc. I don't think they factor in the width of a line the way I had to when changing index+k to index+(k*width).

In C++, that is. It might be different in Java.

This is the output I got from the source http://imgur.com/gB0skz9

Tatarize said...

Looks like it's adding the noise directly onto the end of the image. It should by default be overwriting the data in the array so far, or ignoring it as garbage. More important than the width, as such, is the stride: the number of pixels to the next scanline.

Lemme mark up the entire code so it's really obvious. It honestly should have basically worked out of the box.

Tatarize said...

http://pastebin.com/MRH7xEYB

Commented to hell and gone. Gives details on every line and what it does.

If I had to guess it's that damned matrix in the convolve blur.

The value there is supposed to be index + k, as k is the offset within the x direction and index is the sum of the strides.

If you want to calculate it, wasting a multiplication, you do:

index_position = x_iterator + (stride * y_iterator)


The internal functions have widths that are almost never equal to the actual scanline. This is why it typically adds or subtracts stride in all of those for loops.

Tatarize said...

There is a small bug in the base case. The applyNoise adds the noise at that level, but if the array is reused it'll still have some trash in there. It should do an Arrays.fill(pixels, 0), or in C++ iterate the array and clear it out, or modify that step to not use the function and set the values to actual random values of either 0 or 128. As it is, it kind of leaves some garbage in there, as those bits aren't destroyed during the blur etc. stages in the base case, since it doesn't have them.

http://pastebin.com/AFSnaJnk

I'll update my post accordingly.
