Thursday, March 19, 2015

Dumb Luck and Religious Violence

It's easy to see why people say that religion is at fault for this stuff: such and such people say and do these things, cite religion as their motivation, and point out chapter and verse saying they should. But really, you can use the same religion to argue the opposite in just about every case. It largely depends on the culture, and culture both feeds and is fed by religion.

It's basically chaotic. Run the world over again and you might have suicide bombers accepting principles of Crusade (citing chapter and verse) from the intellectual backwater of a Europe locked in a never-ending dark age.

Religion isn't very helpful, but it's hardly the root of all evil. All religions are equally based on flawed epistemology. Arguing that one is worse than another just because many of those who accept that religion do horrible things, and because there are horrible cultural trends within that tradition that are largely absent in our own, doesn't really work. It just happens that most of the Muslim world is now an intellectual backwater with thoroughly unmodern and violent cultural outcroppings.

And yeah, we likely need to keep such people away from jet airplanes and nuclear weapons. But it's really not intrinsically worse than traditional Western cultures; it's just that Western culture went toward science, progress, and modernity and dragged Christianity along for the ride. It's not superiority, it's basically dumb luck. The religions are all just as false as one another. Claiming that Christianity is better than Islam because Christians don't blow themselves up, or because Christians invented telescopes, is really just saying X is better than Y because of things that are basically dumb luck. Paganism isn't true because pagans invented science and math and philosophy; that's again just dumb luck.

Thursday, February 19, 2015

Combining Convolution Kernels

So not only can you get rid of the idea of the center of a convolution kernel and always write the result into the corner, allowing you to perform the operation in the same memory space you currently reside in, but you can also COMBINE the convolution kernels beforehand.

For example:

private static final int[][] twoboxblurs = new int[][]{
        { 1,2,3,2,1 },
        { 2,4,6,4,2 },
        { 3,6,9,6,3 },
        { 2,4,6,4,2 },
        { 1,2,3,2,1 }
};
This applies the same thing as two box blurs (assuming there was no dividing error the first time). Each entry is the number of times the corresponding pixel, down and to the right, plays a role in the result. So that's the combination of two 3x3 box blur kernels:

        { 1,1,1 }
        { 1,1,1 }
        { 1,1,1 }

convolved with each other. Since each pass adds a contribution to all the pixels down and to the right, you can take the sum of the contributions and properly expand out the combined kernel ahead of time. Since you're never looking for data you won't already have, you can do such a thing.
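As a sketch (the combine() helper is my own illustration, not from any posted source), merging kernels is itself just a convolution of the two kernels; two 3x3 all-ones box blurs expand to exactly the 5x5 matrix above:

```java
// Hypothetical sketch: combine two convolution kernels by convolving them
// with each other. Two 3x3 all-ones box blurs should expand to the 5x5
// kernel shown above.
public class KernelCombine {
    // Full convolution of kernel a with kernel b; the output size is
    // (a.rows + b.rows - 1) x (a.cols + b.cols - 1).
    static int[][] combine(int[][] a, int[][] b) {
        int[][] out = new int[a.length + b.length - 1][a[0].length + b[0].length - 1];
        for (int ay = 0; ay < a.length; ay++)
            for (int ax = 0; ax < a[0].length; ax++)
                for (int by = 0; by < b.length; by++)
                    for (int bx = 0; bx < b[0].length; bx++)
                        out[ay + by][ax + bx] += a[ay][ax] * b[by][bx];
        return out;
    }

    public static void main(String[] args) {
        int[][] box = {{1, 1, 1}, {1, 1, 1}, {1, 1, 1}};
        int[][] merged = combine(box, box);
        // Prints the rows 1 2 3 2 1 / 2 4 6 4 2 / 3 6 9 6 3 / 2 4 6 4 2 / 1 2 3 2 1
        for (int[] row : merged) System.out.println(java.util.Arrays.toString(row));
    }
}
```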

You can use the source code here:

For a convolution that applies within the same memory footprint. It also means you can chain the contributions together: the pixel 2 over and 1 down needs 1 copy of the pixel 2 over and 1 down from it. Recursively follow all those pixels and you're good.

This is why doing a blur and then an emboss gives a different result than an emboss and then a blur: as conventionally implemented, image convolutions don't commute. The problem here is with how convolution is usually done. I was busy trying to figure out clever ways to do a convolution of an image in the same space as the image memory and came across a solution: you can always do a convolution in the same memory footprint if, and only if, the "center pixel" is located in the upper left corner. You then iterate left to right, top to bottom. So long as you never require non-overwritten data from the area above or to your left, this is fine.

Now, how this gets to your problem. Removing the dependency on previous pixels means you can not only do the convolution in the same memory but also merge kernels. If you use convolution with the result pixel located in the upper left, then pixel X doesn't depend on its own location; every pixel used for the convolution is located down and to the right. So you can produce the correct answer by performing both kernels at the same time and writing into that result pixel, which can itself be done with a single kernel.

The kernel you would use is simply these operations combined. So a box blur:

        { 1,1,1 }
        { 1,1,1 }
        { 1,1,1 }

combined with the same box blur again. All points not used are assumed to be zero. In each of those 9 cells you place another copy of the kernel, scaled by the multiplier in that cell, and add up the overlapping parts.

This works because there is no longer any dependency on previous pixels.

The problem isn't with the kernels but with how convolution is always implemented, placing the result in the middle of the field. I'm not sure of the historical reason for this, but really it would be easier to just shift the pixels over and down by one after the fact, if that were truly required.

You will, however, lose the compounded rounding bias: value / 81 rather than (value / 9) / 9 could make your resulting matrix slightly more correct than it would otherwise have been.

Update: Eh, not really that cool. You can get the same answer faster by applying the smallest kernels you have in sequence: 9 + 9 < 25 multiplies per pixel, so two 3x3 passes still beat the combined 5x5.

Olsen Noise 2D Java Code Updated. //commented to hell and back.

I went about rewriting the code. I was always annoyed by how much memory the sucker would grab when it didn't really need to. I fixed all those issues and figured out a better way to do matrix convolutions (for basically all images, all convolutions). It should now be sped up, and able to work without tossing around giant memory blocks. It sits in its own footprint, even when it does operations like blur with a convolution kernel.

To use the class I did:
        on = new OlsenNoise2D(); //really all the functions can be static.
        int rh = on.getRequiredDim(height);
        stride = on.getRequiredDim(width);
        pixels = new int[stride * rh];
        on.olsennoise(pixels, stride, x, y, width, height);
        BufferedImage bi = new BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB);
        bi.setRGB(0, 0, width, height, pixels, 0, stride);

Update: Used to have source here; just use the pastebin.

Wednesday, February 18, 2015

Fast Biased Convolution Algorithm

Ever try to speed up convolution algorithms, then realize they either need to be handed to a GPU, or that most of your lost time is that giant second copy of memory being allocated and deallocated? Do you not mind if your values shift up and to the left by half the kernel width and height? Then I have an algorithm for you!

I'm hoping to tweak it to add a field called bias, which will let you choose the direction of the bias. So long as the algorithm iterates the field diagonally, it can always safely perform the convolution of the data and stick the answer in the corner that is never going to be used again.

So the entire second buffer seems to be pointless. If you put your result point at the corner of the kernel, you can do the convolution with just a scanline, literally a J/K loop, and never allocate another big block of memory.

Why didn't anybody point this out before? The convolutions kicking around are mostly needlessly wasteful. They are obsessed enough with keeping the pixel location consistent that they insist on odd-sized kernels and leave garbage at the edges (or, more typically, just don't apply the kernel there), when really you could get the result in the same memory.
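Here's a minimal sketch of that scanline version (my own illustration; the names and edge handling are assumptions, not the post's actual code). The result lands in the kernel's upper-left corner, so the output shifts up and left by the kernel size, which is the bias the post describes:

```java
// Hypothetical sketch of an in-place, corner-anchored convolution.
// The result for (x, y) reads only pixels at (x..x+kw-1, y..y+kh-1),
// i.e. down and to the right, so overwriting (x, y) is always safe
// when iterating left to right, top to bottom.
public class CornerConvolve {
    static void convolveInPlace(int[] pixels, int stride, int height, int[][] kernel, int divisor) {
        int kh = kernel.length, kw = kernel[0].length;
        for (int y = 0; y < height; y++) {
            for (int x = 0; x < stride; x++) {
                int sum = 0;
                for (int ky = 0; ky < kh; ky++) {
                    for (int kx = 0; kx < kw; kx++) {
                        int sy = y + ky, sx = x + kx;
                        if (sy < height && sx < stride) {  // truncate past the far edges
                            sum += kernel[ky][kx] * pixels[sy * stride + sx];
                        }
                    }
                }
                pixels[y * stride + x] = sum / divisor;  // only already-consumed data is overwritten
            }
        }
    }

    public static void main(String[] args) {
        int[] px = {1, 2, 3, 4};  // a 2x2 image
        convolveInPlace(px, 2, 2, new int[][]{{1, 1}, {1, 1}}, 1);
        System.out.println(java.util.Arrays.toString(px));  // prints [10, 6, 7, 4]
    }
}
```

Note this sketch simply truncates at the far edges and still divides by the full divisor there, so edge pixels darken; a real implementation would pick one edge policy deliberately.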

Sunday, February 1, 2015

On consumers creating jobs

It doesn't matter how risky a bet the guy who raises the capital makes; the risk is whether or not the consumer will buy what he's selling.

This is like insisting that the captain is the single most important person because he's steering the ship, because after all he's the one who would go down with it if it sinks. --- But the consumers in this aren't other members of the crew. They are the motherfucking ocean.

Friday, January 16, 2015

How Does the Mind Work? --- Easy.

"What we haven't been able to figure out are the proper algorithms to make it happen and there is a huge ongoing research effort at the moment to reverse engineer the brain to figure out what kind of algorithms it employs."

I figured this out a number of years back: it's an evolutionary algorithm. In fact, understanding that evolution is the mechanism by which intelligence works also explains a lot of distinct phenomena, like why human brains recognize intelligence in clearly evolved things like trees.

A well done, open-ended algorithm tasked with predicting future events from very sparse data would necessarily develop an embodied reality simulation, and would let us understand things like what we mean when we say we understand things: we have an accurate model by which we can predict the activities of such elements. To borrow terminology from other known evolutionary algorithms like science (it's amazing how similar children's learning and science seem), consciousness is like having a theory of self, much as it's accurate to say children have a theory of mind.

As far as useful organs go, predicting the future is basically worth dedicating a massive amount of resources toward; it's just that doing so requires one to accurately predict what's going on in the present. The brain does this by testing the world with the senses. Rather than the brain processing the senses, it's more akin to the brain predicting what should be happening while the senses check whether that's consistent with their perceptions. So from basically no information we evolve an understanding of the world such that our eyes, which cannot see what we see, can tell us whether or not our impressions of what's going on are right. One will find that under this model basically every observation about brains and consciousness is explained.

An explanation for why Jesus adds to 74:

Wiser cutey diagnose likely faithing bullheaded dumbos error, godson balderdash causing dignify diction baloney; simple flukes between objects implode brainy organs.

Jesus adds up to 74. This is amazing because...

Quick script and...


Sunday, January 4, 2015

Finetuning the universe by luck, because there is no God.

It turns out that the Wall Street Journal published some religious guy's article claiming to be all sciencey, and I've had to use my go-to explanation of the Fine Tuning Argument and its flaws a couple of times. Since I put my stuff here for boilerplate purposes, here's why it's actually an argument against God.

The Fine Tuning Argument's biggest flaw is that it's generally looking at the math wrong. The four proper questions for any such claim are:

1) Given the universe as we know it, what are the odds it got this way given atheism?
2) Given the universe as we know it, what are the odds that it got this way given theism?
3) How many other ways could the universe be if atheism is true?
4) How many other ways could the universe be if theism is true?

Those are the proper values for a Bayesian analysis. Interestingly enough, the argument for Fine Tuning is almost always limited to saying the odds for #1 are vanishingly small! But that doesn't actually seem to be true. The more serious problem with the argument, however, is that the answer to #3 is "very few," while the answer to #4 is pretty much every single universe imaginable, since God could just magick any of them into being functional.
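In Bayesian terms, the four questions feed a likelihood ratio, roughly (this formalization is mine, not from the original post):

```latex
\frac{P(\text{theism} \mid E)}{P(\text{atheism} \mid E)}
  = \frac{P(E \mid \text{theism})}{P(E \mid \text{atheism})}
    \cdot \frac{P(\text{theism})}{P(\text{atheism})}
```

where E is the universe as observed. Questions #1 and #2 are the likelihoods, and #3 and #4 determine how thinly each hypothesis spreads its probability: a hypothesis compatible with nearly any universe assigns a low likelihood to this particular one.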

The universe is vast, like hugely, hugely vast. And old, very, very old. If there were no God, the only way to get life like ours would be random chance followed by evolution, and for that to work we'd need a vast universe with lots of chemicals and eons of time to mix them randomly until something turned up that could replicate. And against all odds, we have exactly that. -- If God existed, he wouldn't have to make a universe that looks exactly like one that should exist if atheism were true. He could literally just make one star and one planet, or one planet made magically warm enough for things to work out, etc. Only atheism needs billions and billions of years and 100 trillion planets. Theism could poof anything into being as a solution, and certainly would have no need to make the universe look exactly as it would need to be if there were no God.

Contrary to how the argument is often offered, the Fine-Tuning Argument, properly looked at under a Bayesian lens, is a fantastic argument for atheism.

Thursday, December 18, 2014

What are the odds 7 dice roll a higher number than 8 dice?

7 dice wins 129007254816 rolls vs. 316648728585 wins for 8 dice (the remaining rolls are ties).

That's 129007254816 of the 470184984576 total rolls, or 27.437553101%.

    static final int SIDES = 6;
    static final int MINROLL = 1;

    public long combinatoricaldice(int d0, int d1) {
        int[] dist0 = convolvedistribution(d0);
        int[] dist1 = convolvedistribution(d1);
        long wins = combdice(dist0, dist1);
        System.out.println(d0 + " dice wins " + wins + " vs. " + d1 + " dice.");
        return wins;
    }

    // Builds the distribution of sums for `dice` dice by repeated convolution.
    public int[] convolvedistribution(int dice) {
        int[] roll = new int[((SIDES - 1 + MINROLL) * dice) + 1];
        for (int i = 0; i < SIDES; i++) {
            roll[i + MINROLL] = 1;
        }
        return convolvedistribution(roll, dice - 1, new int[roll.length]);
    }

    public int[] convolvedistribution(int[] roll, int dice, int[] temp) {
        if (dice == 0) return roll;
        Arrays.fill(temp, 0);
        for (int i = 0, s = roll.length; i < s; i++) {
            for (int q = i + MINROLL, m = i + SIDES + MINROLL; q < m && q < s; q++) {
                temp[q] += roll[i];
            }
        }
        System.arraycopy(temp, 0, roll, 0, temp.length);
        return convolvedistribution(roll, dice - 1, temp);
    }

    // For each possible sum m of the first pool, counts the rolls of the
    // second pool that come in strictly lower.
    public long combdice(int[] sumdistribution0, int[] sumdistribution1) {
        long winning = 0;
        for (int m = 0; m < sumdistribution0.length; m++) {
            long s0 = (long) sumdistribution0[m];
            long winsagainst = 0;
            for (int n = 0; n < m && n < sumdistribution1.length; n++) {
                winsagainst += sumdistribution1[n];
            }
            winning += (s0 * winsagainst);
        }
        return winning;
    }

    // Odometer-style increment over all dice combinations.
    public boolean incrementDice(int[] dice) {
        for (int m = 0; m < dice.length; m++) {
            if ((dice[m] - MINROLL) + 1 < SIDES) {
                dice[m]++;
                for (m = m - 1; m >= 0; m--) {
                    dice[m] = MINROLL;
                }
                return true;
            }
        }
        return false;
    }

    // Brute-force version of convolvedistribution(), for checking.
    public int[] sumdistribution(int d0) {
        int[] dice = new int[d0];
        Arrays.fill(dice, MINROLL);
        int[] dist = new int[((SIDES - 1 + MINROLL) * d0) + 1];
        do {
            int sum0 = 0;
            for (int d : dice) {
                sum0 += d;
            }
            dist[sum0]++;
        } while (incrementDice(dice));
        return dist;
    }
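As a sanity check on the sum-distribution counting (this checker is my own, not from the post), here's a small case, 1 die vs. 2 dice, computed both by brute force and by the distribution method:

```java
// Hypothetical sanity check: count the rolls where one die beats the sum of
// two dice, by exhaustive enumeration and by sum distributions.
public class DiceCheck {
    static final int SIDES = 6;

    // Count (a, b1, b2) triples where a > b1 + b2.
    static long bruteForce() {
        long wins = 0;
        for (int a = 1; a <= SIDES; a++)
            for (int b1 = 1; b1 <= SIDES; b1++)
                for (int b2 = 1; b2 <= SIDES; b2++)
                    if (a > b1 + b2) wins++;
        return wins;
    }

    // Same count via sum distributions: for each sum m of pool A, multiply
    // its count by the number of pool-B rolls with a strictly smaller sum.
    static long viaDistributions() {
        int[] dist0 = new int[7];                 // sums for 1 die: index 1..6
        for (int a = 1; a <= SIDES; a++) dist0[a]++;
        int[] dist1 = new int[13];                // sums for 2 dice: index 2..12
        for (int b1 = 1; b1 <= SIDES; b1++)
            for (int b2 = 1; b2 <= SIDES; b2++) dist1[b1 + b2]++;
        long wins = 0;
        for (int m = 0; m < dist0.length; m++) {
            long below = 0;                       // pool-B rolls strictly under m
            for (int n = 0; n < m && n < dist1.length; n++) below += dist1[n];
            wins += dist0[m] * below;
        }
        return wins;
    }

    public static void main(String[] args) {
        System.out.println(bruteForce());        // prints 20
        System.out.println(viaDistributions());  // prints 20
    }
}
```

Both approaches agree: 20 of the 216 possible rolls have the single die strictly ahead.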

Friday, November 28, 2014

Conway's BufferedImageOp of Life

    protected class ConwayImageOp implements BufferedImageOp {

        public BufferedImage filter(BufferedImage bi, BufferedImage bout) {
            if (bout == null) bout = createCompatibleDestImage(bi, null);
            int width = bi.getWidth();
            int height = bi.getHeight();
            int[] rgbArray = new int[width * height];
            int[] modArray = new int[width * height];
            rgbArray = bi.getRGB(0, 0, width, height, rgbArray, 0, width);
            int peer;
            int x, y, xr, yr;
            int[][] karray = {
                {-1, -1}, {-1, 0}, {-1, 1},
                {0,  -1},           {0, 1},
                {1, -1},  {1, 0},   {1,  1}
            };

            for (int index = 0, mi = rgbArray.length; index < mi; index++) {
                int[] counts = new int[24]; // one neighbor count per bit plane
                x = index % width;
                y = index / width;
                for (int m = 0, q = karray.length; m < q; m++) {
                    xr = x + karray[m][0];
                    yr = y + karray[m][1];
                    if ((xr >= 0) && (yr >= 0) && (xr < width) && (yr < height)) {
                        peer = rgbArray[(yr * width) + xr];
                        for (int i = 0; i < 24; i++) {
                            if (((peer >> i) & 1) == 1) counts[i]++;
                        }
                    }
                }
                int current = rgbArray[index];
                int conway = 0;
                for (int pix = 0; pix < 24; pix++) {
                    conway |= (Conway(((current >> pix) & 1), counts[pix]) << pix);
                }
                modArray[index] = conway | 0xFF000000;
            }
            bout.setRGB(0, 0, bout.getWidth(), bout.getHeight(), modArray, 0, bout.getWidth());
            return bout;
        }

        int Conway(int current, int sum) {
            if (sum == 3) return 1;
            if ((current == 1) && (sum == 2)) return 1;
            return 0;
        }

        public Rectangle2D getBounds2D(BufferedImage bi) {
            return null;
        }

        public BufferedImage createCompatibleDestImage(BufferedImage bi, ColorModel cm) {
            return new BufferedImage(bi.getWidth(), bi.getHeight(), BufferedImage.TYPE_INT_ARGB);
        }

        public Point2D getPoint2D(Point2D pd, Point2D pd1) {
            return null;
        }

        public RenderingHints getRenderingHints() {
            return null;
        }
    }
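The core trick, running 24 independent Game of Life boards in the bit planes of a single int per cell, can be sketched standalone (my own illustration mirroring the filter above, without the AWT plumbing):

```java
// Hypothetical standalone sketch of the 24-bit-plane Game of Life step.
public class BitPlaneLife {
    static int conway(int alive, int neighbors) {
        if (neighbors == 3) return 1;
        if (alive == 1 && neighbors == 2) return 1;
        return 0;
    }

    // One generation over all 24 bit planes at once.
    static int[] step(int[] cells, int w, int h) {
        int[] next = new int[cells.length];
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                int[] counts = new int[24];
                for (int dy = -1; dy <= 1; dy++) {
                    for (int dx = -1; dx <= 1; dx++) {
                        if (dx == 0 && dy == 0) continue;
                        int nx = x + dx, ny = y + dy;
                        if (nx < 0 || ny < 0 || nx >= w || ny >= h) continue;
                        int peer = cells[ny * w + nx];
                        for (int b = 0; b < 24; b++)
                            if (((peer >> b) & 1) == 1) counts[b]++;
                    }
                }
                int current = cells[y * w + x];
                int out = 0;
                for (int b = 0; b < 24; b++)
                    out |= conway((current >> b) & 1, counts[b]) << b;
                next[y * w + x] = out;
            }
        }
        return next;
    }

    public static void main(String[] args) {
        // Horizontal blinker in bit plane 0 on a 5x5 board.
        int w = 5, h = 5;
        int[] cells = new int[w * h];
        cells[2 * w + 1] = 1; cells[2 * w + 2] = 1; cells[2 * w + 3] = 1;
        int[] next = step(cells, w, h);
        // After one step the blinker stands vertical in column 2.
        System.out.println(next[1 * w + 2] == 1 && next[2 * w + 2] == 1 && next[3 * w + 2] == 1);  // prints true
    }
}
```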

Sunday, November 16, 2014

Conway's Game of Life In 24 Dimensions.

Smart Sharpen

Smart Sharpen, threshold color sharpen operation. In case you want to avoid mussing up the edges in an image, but you want to make it *really* hard to compress for basically no reason.

Preserves the important bits, causes the unimportant bits to be oversharpened.


Friday, October 10, 2014

On Islam and Culture

Too many people think religion is the core of their identity, and they grossly underestimate the impact of culture. As an American atheist, I have more in common with American Christians and American Muslims than with British atheists or British Christians or British Muslims. Put a few representative people in a room and see how long it takes to identify them. The American would spot the American long before any religion would matter. -- Yeah, there's a region of the world where Jihad is a cultural thing, and yes, it's fed by the religion, but the idea that I should fear American Muslims is absurd.

Muslim is a much smaller cultural difference than American is; we should not fear American Muslims, because they are American. There's a region of the world where FGM is practiced, and it's largely local culture, though the religion tends to defend it with various parts of the Hadith, and it even ends up in places like Indonesia and Malaysia as a purely Islamic rite. It's still all culture (of which religion is a component), but religion is a much smaller component than American Christians would like to believe. Christianity isn't liberal and reformed by nature; it's just that culturally we are all children of the Enlightenment.

It's pretty easy to envision a world where the roles are reversed: where America was completely Muslim and took basically none of the Koran seriously but embraced Enlightenment values, and the Middle East was entirely Christian, executing people for apostasy and stoning women to death in honor killings, with large numbers of ignorant fundamentalists embracing Crusade.

Friday, September 19, 2014

Olsen Noise Source Code in Java.

I have it autonormalizing. Because Olsen Noise doesn't artifact, it's plenty doable to have the noise routine fall into a specific range across all pixels by design. In this case it will always be between 0 and 255, because the random additions are pinned to the exact bit level of the iteration, with max iterations at 7. Due to the blur and random speckling it will likely fall into a Gaussian distribution centered around 128.

The math at the beginning is kind of confusing. I changed the routine from recursive to iterative, so to figure out the base case it needs to iterate down to the base case. There may be a way to solve the recurrence directly and figure out what the window size is at any given iteration. But for now it does (v/2)-1 for all the lows (applying floor due to integer division) and 1-(-v/2) for all the highs (that double negative is there to make it a ceiling op).

It creates too many int[][] objects. One for upsample, One for Blur, and One for trim, each of the 7 iterations. So 21 such objects. I think it can be done with 3.

Update: Blur object can be removed by subdividing the blur into length then width (and division). The two pass solution allows for the same int[][] object that stores the value for blurring to be the same object the blur is passed into.
2/15 Update: The trim isn't really even needed: if you properly use a 1d array to store the values and skip around with a stride, you can avoid trimming, since most image APIs will accept that you give them more pixels and tell them where those pixels properly sit in the image (thus doing the trim for you).

One of the more important things here is that at some particular iteration it can't know, from its current position, where it is actually located in the iteration above it. It actually needs to calculate the x0,y0 and x1,y1 positions, compare that to the current location scaled and translated into where it would normally be, and use that to calculate the offset. That's what this: xoff = -2*((nx/2)) + nx + 1; actually does. It figures out from the next x whether the data, when moved into the trimmed matrix, should be shifted somewhere. Without these calculations you'll still get noise, it just won't be stable.

I should update or make a new javascript demo in both 2 and 3d. It would be hard to have it melt through time to show the 3D version off. 

Wrapping should be possible. It could be done at Q*(2^I) for any Q; with iterations at 7 that's 128, and any multiple of 128. Upsample would be unaffected, but the random additions would need to know to wrap, and the blur would need to wrap around the edge rather than truncate at the edges.

Update: -- Originally had source code pasted here. It exists in the slashed-out pastebin if you need it. But it's old; see --

Thursday, September 11, 2014

Absolute Certainty.

Ignorance of Ignorance is the only thing in absolute certainty.

Monday, September 8, 2014

Joss Whedon's Firefly, "Objects in Space" as a modern chiasmus.

The Firefly episode "Objects in Space" is a chiasmus; it follows an overt chiastic structure, also called an inclusio or sometimes a Markan sandwich in the Gospel of Mark.

Today most of our examples are short literary phrases, but having a total chiastic structure in a large work was not uncommon; the Gospel of Mark apparently has one. They are rare today, and we don't teach the form much.

A chiasmus is a structure where each part of the beginning parallels the end, the second part parallels the penultimate part, and the third part parallels the antepenultimate part: a sort of meaning palindrome. In modern times it has been ignored outside of short literary phrases, rather than used across an entire work of media. But "Objects in Space" certainly fits the pattern, which is a rarity.

Some of the links might be a byproduct of simply trying to mirror the characters of River and Early, alike in their form but different in their intents and desires. But looking at the characters as their literal object selves, you still get a sort of crazy psychic assassin, created by the Alliance, with preternatural shooting abilities beyond the crew's.

Structure of the Episode

  • Objects floating in space.
    • Crew interactions (disjointed)
      • Early Descent
        • Discussion about River.
          • Kaylee is scared.
            • Early boards Serenity.
              • Wanders the ship / deals with crew.
            • River becomes Serenity.
          • Kaylee is brave.
        • Discussion about Early.
      • River Descent
    • Crew interactions (united).
  • Objects floating in space.

Saturday, August 9, 2014

Some site I used to reference a Richard Carrier answer from seems gone (tabee3i).

So I loaded up the way back machine and snagged the interview.

I had cited it in my life changes through various media thing before.

I totally wouldn't post it here, but I'm worried it might vanish forever.

tabee3i a home for Metaphysical Naturalists
By: Enki, November 5th, 2009
Richard Carrier

Richard Carrier is a world-recognized atheist philosopher, teacher, and historian. He holds a Ph.D. in Greco-Roman intellectual history from Columbia University. He is best known as the author of Sense and Goodness without God: A Defense of Metaphysical Naturalism, and for his writings on the Secular Web (also known as the Internet Infidels), where he stayed editor-in-chief for several years (now emeritus). He is a major contributor to The Empty Tomb and was also featured in the documentary film The God Who Wasn't There. Dr. Carrier has published many articles in books, magazines, and journals and has made many appearances across the US and on national television defending sound historical methods and the ethical worldview of secular naturalism.

I have contacted Dr. Carrier and asked him about Metaphysical Naturalism, Christianity, atheism in the Middle East, his political opinions, and personal life.

1- First, let me start by thanking you again for your time. Looking at the various definitions of 'nature' or 'natural' that Keith Augustine has discussed in his thesis "A Defense of Naturalism", I would love to hear your version of the definition.

I discuss this very thoroughly, with entertaining examples, here: Defining the Supernatural I also have a forthcoming paper in Free Inquiry on the very issue of defining naturalism (perhaps next year, it's been languishing in their queue for years already, title "On Defining Naturalism as a Worldview," by last report will appear in the April/May issue of 2010, but it's been bumped before and may again).

2- One of atheism's strengths is being the default position in which it's not a claim but rather a response to a claim. Do you think this strength might get weakened as metaphysical naturalism is not only an assertion about what exists but it goes beyond that to a worldview?

I see it as entirely the other way around: mere atheism is the weaker position.

First, you can't go through life without a complete worldview, so in actual fact you have one whether you know it or not (unless you are insane, although often even then), so if you try to go around like a mere atheist, you are de facto going around with a completely unexamined, ill-tested, un-thought-out worldview, which you might not even be aware of even though you rely on it daily. On the one hand, Christians can take advantage of this fact. If they have thought their worldview through better than you have, they can easily expose the failures of yours, which leads to a serious weakness in mere atheism (as I'll explain in a moment). On the other hand, it's just dumb. You shouldn't be going around with a completely unexamined, ill-tested, un-thought-out worldview. Even if there were no religions. Thus, I say, stop doing that and start examining, testing, and thinking out your worldview, instead of pretending you don't have one.

I think the fear is that having a worldview commitment is equated with dogmatism and certainty, which is a fallacy. You can have a tentative worldview, with various components in various stages of uncertainty, and often revise your worldview without embarrassment (scientists do it all the time), even rest from time to time on unresolved sets of options at some points, but you still must have (and do have, whether you know it or not) some idea of the hierarchy of probabilities and possibilities. Even if one element of your worldview is highly uncertain, you are epistemically obligated to make sure it's still the most probable element of all known alternatives. Likewise, if you are unsure between, say, three different ways to answer a question, and so go around assuming any one of them may be correct, you are still epistemically obligated to make sure these options are not only the most probable of all known options but that they are equally probable to each other, otherwise you should be leaning in the direction of the most probable one, to some degree at least. If you do not do this, you will succumb to the folly of assuming all possible answers to a question are equally probable, which is not only nuts, it's a fallacy Christians routinely exploit.

Second, the modern Christian apologetic amounts to this: we have better explanations of all the so-far scientifically unexplained phenomena of the world than you do, therefore it is irrational not to see our worldview as presently the most probably correct. Taking a position of mere atheism is not only of no use against that apologetic, it's actually immediately defeated by it. There is only one way to validly respond to it. You have to prove the central premise false: they do not have better explanations of all the so-far scientifically unexplained phenomena of the world than we do. You can do this by agnostically articulating several equally good explanations, but at some point that just becomes pedantic and naive, because if you really did it competently, you'd realize even those "equally good" explanations, all of them, are defeated by an explanation that is in fact better. Thus, agnosticism is defeated by naturalism. Therefore it is agnosticism (and equivalently weak atheism) that is the weaker argument, not the other way around. And just as naturalism defeats agnosticism, it also a fortiori defeats Christianity by using their own apologetic against them: no, sir, in point of fact we have better explanations of all the so-far scientifically unexplained phenomena of the world than you do, therefore it is irrational not to see our worldview as presently the most probably correct.

I think the common mistake is to assume that claiming this is equivalent to declaring dogmatic certainty in naturalism. But that's the same fallacy I pointed out above. Saying naturalism is the most probably correct worldview on present evidence (and IMO, it is so by a large margin, no other competitor even comes close, a fact that isn't always obvious to those not well informed of the actual facts) merely means it is more probable than alternatives, not that it is itself decisively or undeniably certain. "More probable" does not mean "100%," or even "80%." It just means more. If the next most probable worldview is 20% probable, naturalism need only be 55% likely to be vastly more credible. I'm just making up numbers. But you see my point. Showing that we have better explanations for each peculiar fact is enough to refute Christianity. We need not assert that those explanations are therefore true, only that of all explanations so far conceived, those are far more likely to be correct than any others. That may change tomorrow as new information comes, showing some other explanation even more credible still. But right now, we ought to believe what the evidence makes most likely. And once you realize that naturalism has a better explanation of everything than Christianity, you'll realize it has a better explanation of everything than any other worldview. Which leads to only one rational conclusion: we all should be naturalists. At least for now. Maybe future evidence will change our minds, but we have to go on what we know now. Leave the future for later.

Monday, August 4, 2014

New Theme.

The black was a drag. So I posted a very happy rainy day theme instead.

3D Olsen Noise

So I made a newer noise algorithm beyond fractal diamond squared noise. I previously removed the limitations on size and the memory issues, allowing proper paging.

Now I got rid of the diamond squared bit, and the artifacts it produces. As well as allowed the algorithm to be quickly expanded into multiple dimensions.

Basically, rather than doubling the size, applying the diamond elements, and then applying the square elements:

You upsample increasing the size of each pixel to 2x in each dimension. Add noise to every pixel reducing it depending on the iteration iteration (my current reduction is Noise / (iteration + 1)). Apply a box blur (though any blur would work).  And it's all done in my infinite field scoping scheme, wherein the base case is pure randomness, and each later case is recursively scoped.
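The three steps above can be sketched in Java. This is a minimal 2d illustration of one iteration as just described (upsample, add iteration-scaled noise, box blur); the class and method names, and the `java.util.Random` stand-in for the noise source, are my own, not the released source.

```java
// A minimal 2d sketch of the three per-iteration steps: upsample 2x,
// add iteration-scaled noise, box blur. Names are illustrative.
import java.util.Random;

class OlsenStep {
    // Nearest-neighbor upsample: each pixel becomes a 2x2 block.
    static int[][] upsample(int[][] map) {
        int h = map.length, w = map[0].length;
        int[][] out = new int[h * 2][w * 2];
        for (int y = 0; y < h * 2; y++)
            for (int x = 0; x < w * 2; x++)
                out[y][x] = map[y / 2][x / 2];
        return out;
    }

    // Add noise shrinking with the iteration: Noise / (iteration + 1).
    static void addNoise(int[][] map, int iteration, long seed) {
        Random rng = new Random(seed + iteration);
        for (int[] row : map)
            for (int x = 0; x < row.length; x++)
                row[x] += rng.nextInt(256) / (iteration + 1);
    }

    // 3x3 box blur, averaging whatever neighbors exist at the edges.
    static int[][] blur(int[][] map) {
        int h = map.length, w = map[0].length;
        int[][] out = new int[h][w];
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++) {
                int sum = 0, count = 0;
                for (int dy = -1; dy <= 1; dy++)
                    for (int dx = -1; dx <= 1; dx++) {
                        int ny = y + dy, nx = x + dx;
                        if (ny >= 0 && ny < h && nx >= 0 && nx < w) {
                            sum += map[ny][nx];
                            count++;
                        }
                    }
                out[y][x] = sum / count;
            }
        return out;
    }
}
```

Any blur would do in place of the box blur; it's just the cheapest one that kills the grid artifacts.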

Update 9/22: The Java Source Code and Demo have a noise reduction of +bit@iteration. So iteration 7 flags the 7th bit, giving +128 or +0; the 6th bit gives +64 or +0, and so on. Doing this allows it to skip normalization, as the end result will *always* fall between 0 and 255.
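A sketch of what that trick amounts to, assuming iteration i contributes either 0 or its own bit i (so iteration 7 gives +128 or +0, matching the update above): the per-iteration contributions occupy disjoint bits, so the total can never leave [0, 255] and no normalization pass is needed. The class and method names are mine.

```java
// Illustration of the bit-per-iteration noise reduction: each
// iteration adds either 0 or bit i, so summed contributions fill
// disjoint bits and the total stays within [0, 255].
import java.util.Random;

class BitNoise {
    // Either 0 or the iteration's own bit, chosen pseudo-randomly.
    static int noiseAt(Random rng, int iteration) {
        return rng.nextBoolean() ? (1 << iteration) : 0;
    }

    // Summing iterations 0..7 fills at most bits 0..7: max 255.
    static int accumulate(Random rng, int maxIteration) {
        int total = 0;
        for (int i = 0; i <= maxIteration; i++)
            total += noiseAt(rng, i);
        return total;
    }
}
```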

No more artifacts, the algorithm would implement quickly on GPU hardware, and it doesn't change at N-dimensions.

Update: While the algorithm was actually made with GPU hardware in mind, and would implement very quickly exactly where diamond-square would not, it does change at N-dimensions, in that more of the roughness flows into the additional dimensions. Rather than averaging over 9 somewhat random pixels at a given level, it will average 27, meaning each level falls much closer to the mean. You might still get the desired effects by reducing the number of iterations.

I've also confirmed that a 2d slice of 3d noise is the same as a plain 2d patch of the noise. Since it's fractal, this should be expected. I don't think you can do things like turbulence bump-mapping as with simplex noise, because the absolute value of Olsen Noise is pretty much just fractal noise again. Fractals are fun like that.

Update: It's this fact about Olsen Noise that initially led to my false confirmation of the noise. If you normalize it, regardless of whether it's excessively smooth or not, it will look like identical noise. If you go that route, the noise won't change from 2d to 3d, because the narrower-ranged 3d noise will be zoomed in on and give the same appearance of roughness.

And since the noise is scoping, you can map it out in N-dimensions. So not only could you make it go around corners without hard edges, like this paper is so happy with itself for doing: you simply go from wanting a 1x500x500 slice at 0,0 to wanting a 500x1x500 slice at 0,500. It would by definition be seamless.

And unlike other noise algorithms, it's fast and *simple*. In fact, it's a number of simplifications of diamond-square noise all rolled up in an infinite package (which is itself a simplified bit).

One can reduce the iterations with distance: far enough away from you, you have 4-block sections, which are the same as the close bits but drop an iteration.

Update: Reducing the iterations in the demo can be seen as sampling at 2x2 the value; it's basically the same cost. You don't need to generate the full size and reduce it, you can just request the area scaled down by 2x2 at one fewer iteration.
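As a tiny illustration of that trick (the request object and its names are hypothetical, not from the demo source): the same region at 1:2 sampling is just the coordinates halved with one iteration dropped.

```java
// Hypothetical request object illustrating the level-of-detail trick:
// the same area at 1:2 sampling is just halved coordinates at one
// fewer iteration, for roughly the same cost.
class LodRequest {
    final int x0, y0, x1, y1, iterations;

    LodRequest(int x0, int y0, int x1, int y1, int iterations) {
        this.x0 = x0; this.y0 = y0; this.x1 = x1; this.y1 = y1;
        this.iterations = iterations;
    }

    // Request the same area scaled down by 2x2 at one fewer iteration.
    LodRequest halfResolution() {
        return new LodRequest(x0 / 2, y0 / 2, x1 / 2, y1 / 2,
                iterations - 1);
    }
}
```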

Sampled at 1:5

Sampled at 1:20


If it were mission-critical to have the noise wrap like old diamond-square noise, this could be done if the deterministic hash of x, y, z..., iteration were taken as x MOD wrap, y MOD wrap, z MOD wrap with regard to the iteration; you would likely need to scope the wrapping. So if you wanted it to wrap with iterations equal to 7 (all my given examples here use 7 iterations) and wrap at 100, your deterministic random hash function would give you random offsets modded at 100. Then at the call for iteration 6, your deterministic random hash function would give you random offsets looped at 51. And this would be independent of your requested set of pixels; it would do the recursive scope to make sure the random variables given sync up. But you could do awesome things like wrap at a specific (and different) x, y, and z, so you could make noise that wraps horizontally at 1000 but vertically at 50. In theory. I haven't managed to get it to work, and there could be some kind of desyncing that happens when one iteration is looping at 16 and the next at 31. It might require a multiple of 2 for the wrapping, or even a 2^(max iteration) wrapping, or nothing at all.

Wrapping is left to later. I'll settle for better than everything else.

Smoothness is mostly a product of the number of iterations along with the drop off rate of the randomness.

Update: Algorithm Outline.
It occurs to me that I should have provided some pseudocode.

getTerrain(x0, y0, x1, y1, iterations) {
    if (iterations == 0) return matrixOf(random numbers);
    map = getTerrain(floor(x0/2) - 1, floor(y0/2) - 1, ceiling(x1/2), ceiling(y1/2), iterations - 1);
    make a newmap twice as large
    upsample map into newmap
    apply blur to newmap
    add deterministic random offset to all values in newmap (decreasing each iteration)
    return requested area from within newmap (this is typically newmap from [1, n-1], [1, n-1])
}
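The outline fleshes out into compact, runnable Java, following the pseudocode's step order. This is my own sketch rather than the released source: the hash constants, the seed, and an extra cell of ceiling padding (so the blur never reads past the map edge inside the cropped window) are all my assumptions.

```java
// A sketch of getTerrain in Java. Hash constants, seed, and the extra
// padding cell are my choices, not the original source.
class OlsenNoise {
    static final long SEED = 12345L; // assumed; any fixed seed works

    // Deterministic pseudo-random value in [0, 255] per lattice point.
    static int hash(int x, int y, int iteration) {
        long h = SEED ^ (x * 374761393L) ^ (y * 668265263L)
                ^ (iteration * 2147483647L);
        h = (h ^ (h >>> 13)) * 1274126177L;
        return (int) ((h ^ (h >>> 16)) & 0xFF);
    }

    static int[][] getTerrain(int x0, int y0, int x1, int y1, int iterations) {
        int w = x1 - x0, h = y1 - y0;
        int[][] result = new int[h][w];
        if (iterations == 0) { // base case: pure deterministic randomness
            for (int y = 0; y < h; y++)
                for (int x = 0; x < w; x++)
                    result[y][x] = hash(x0 + x, y0 + y, 0);
            return result;
        }
        // Recurse on the parent scope, padded one cell past the pseudocode's
        // ceiling so the blur never clamps inside the requested window.
        int px0 = Math.floorDiv(x0, 2) - 1, py0 = Math.floorDiv(y0, 2) - 1;
        int px1 = -Math.floorDiv(-x1, 2) + 1, py1 = -Math.floorDiv(-y1, 2) + 1;
        int[][] map = getTerrain(px0, py0, px1, py1, iterations - 1);
        int uw = (px1 - px0) * 2, uh = (py1 - py0) * 2;
        int[][] up = new int[uh][uw];
        for (int y = 0; y < uh; y++)       // upsample map into newmap
            for (int x = 0; x < uw; x++)
                up[y][x] = map[y / 2][x / 2];
        int[][] blurred = new int[uh][uw]; // apply 3x3 box blur
        for (int y = 0; y < uh; y++)
            for (int x = 0; x < uw; x++) {
                int sum = 0, count = 0;
                for (int dy = -1; dy <= 1; dy++)
                    for (int dx = -1; dx <= 1; dx++) {
                        int ny = y + dy, nx = x + dx;
                        if (ny >= 0 && ny < uh && nx >= 0 && nx < uw) {
                            sum += up[ny][nx];
                            count++;
                        }
                    }
                blurred[y][x] = sum / count;
            }
        // Add the decreasing deterministic offset and crop the request.
        int ox = x0 - px0 * 2, oy = y0 - py0 * 2;
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++)
                result[y][x] = blurred[oy + y][ox + x]
                        + hash(x0 + x, y0 + y, iterations) / (iterations + 1);
        return result;
    }
}
```

Because every value in this sketch is a function of world coordinates only, two overlapping requests agree exactly in their overlap, which is what makes the paging and the seamless N-dimensional slicing described earlier fall out for free.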

Update: Actual Java Source Code.

Update: Demo.

Update: 3D Noise Source Code, With Commenting.