Tuesday, June 27, 2017

Approximating an arc with cubic Bézier curves, yet another metric.

C = 0.552

Really,
C = 0.55201372171

The idea is that rather than minimizing the extrema of the error, we can instead minimize the total error. Graph the formula for the error and calculate the area under the curve as a whole. So rather than making the positive and negative error equal, this seeks to minimize the error overall. It's not too different from the other metrics, but rather than trying to keep the single worst pixel from being wrong, it tries to keep the pixels as a whole from being wrong for as long as possible. The result differs from the equal-extrema number because, oddly, the error graph is symmetric about 0.5 in t but not about zero error: the areas of positive and negative error are different, and they shift as the value of c shifts.
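A minimal sketch of the measurement, as I understand it (the function and names here are mine, not the exact program behind the numbers below): walk the cubic quarter arc from (0, 1) to (1, 0) with control points (c, 1) and (1, c), take each sample point's distance from the unit circle as the error, and add up the absolute values.

def total_error(c, samples=10000000):
    # Total absolute radial error of the cubic Bezier quarter arc
    # P0 = (0,1), P1 = (c,1), P2 = (1,c), P3 = (1,0),
    # sampled at samples + 1 evenly spaced values of t.
    total = 0.0
    for i in range(samples + 1):
        t = i / samples
        mt = 1.0 - t
        x = 3.0 * mt * mt * t * c + 3.0 * mt * t * t + t * t * t
        y = mt * mt * mt + 3.0 * mt * mt * t + 3.0 * mt * t * t * c
        # Distance of the sample from the unit circle centered at the origin.
        total += abs((x * x + y * y) ** 0.5 - 1.0)
    return total

# The figures below use 0.0000001 increments (10,000,001 samples), which is
# slow in pure Python; a coarser run shows the same ordering, just scaled down.
print(total_error(0.55201372171, samples=10000))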

Mortensen's value, the even-extrema point.
0.55191502449 at 0.0000001 increments
Total error: 1180.57375326880434046054832219597498652621901536765190
Samples: 10000001

My calculated value, brute force.
0.55201372171 at 0.0000001 increments
Total error: 1159.83397426356807198675826572857280613256846239728729
Samples: 10000001

Naive geometric value, with purely positive error, (4/3)*(sqrt(2) - 1).
0.552284749 at 0.0000001 increments
Total error: 1401.62730758375303300973494715872257045283590056376636
Samples: 10000001

As we can see, the total error over ten million samples gives us an improvement of about 20.7. Compared to the naive value with all of the error positive we gain about 242, which is better than Mortensen's gain of about 221.

We're talking literally fractions of a percent here, but this number has another advantage: you can call it 0.552, which is a much shorter decimal. Besides, the more slices I do, the more I narrow in on the value, but it's still a bit off. I'm pretty sure of most of those digits, but without actually doing the calculus I can't get much better than that.
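The narrowing can be as simple as repeatedly scanning a window of candidate c values and shrinking the window around the best one. This is a rough sketch of that kind of brute force; the window, sample count, and number of rounds are illustrative choices of mine, much coarser than what produced the figures above.

def total_error(c, samples=10000):
    # Same total-absolute-error measurement as before, coarser by default.
    total = 0.0
    for i in range(samples + 1):
        t = i / samples
        mt = 1.0 - t
        x = 3.0 * mt * mt * t * c + 3.0 * mt * t * t + t * t * t
        y = mt * mt * mt + 3.0 * mt * mt * t + 3.0 * mt * t * t * c
        total += abs((x * x + y * y) ** 0.5 - 1.0)
    return total

def narrow_c(lo=0.54, hi=0.56, rounds=4):
    # Scan 101 candidates across [lo, hi], then re-center a smaller window
    # on the best candidate and repeat.
    best = lo
    for _ in range(rounds):
        step = (hi - lo) / 100.0
        best = min((lo + i * step for i in range(101)), key=total_error)
        lo, hi = best - step, best + step
    return best

print(narrow_c())  # heads toward roughly 0.5520 at these coarse settings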

0.552 at 0.0000001 increments
Total error: 1160.26180861006145200701189677079769078925976995284068
Samples: 10000001

You'll notice my value is only 0.4278 different from that in total error over ten million samples. And since the truncation lowers it, it only moves it a bit closer to the even extrema point, which is a fine metric.

P_0 = (0,1), P_1 = (c,1), P_2 = (1,c), P_3 = (1,0)
P_0 = (1,0), P_1 = (1,-c), P_2 = (c,-1), P_3 = (0,-1)
P_0 = (0,-1), P_1 = (-c,-1), P_2 = (-1,-c), P_3 = (-1,0)
P_0 = (-1,0), P_1 = (-1,c), P_2 = (-c,1), P_3 = (0,1)
with c = 0.552
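For reference, here is one way to string those four segments together; the SVG path below is just my own way of illustrating the segment list, with c = 0.552 as above.

c = 0.552
# The four cubic segments listed above, concatenated into one closed
# SVG path: a full approximate circle of radius 1 around the origin.
path = (
    f"M 0,1 "
    f"C {c},1 1,{c} 1,0 "
    f"C 1,{-c} {c},-1 0,-1 "
    f"C {-c},-1 -1,{-c} -1,0 "
    f"C -1,{c} {-c},1 0,1 Z"
)
print(path)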


Under this metric the quadratic value is something like 0.92103099, which might be even more important, because flattening out the error probably matters a lot more in the quadratic case than it does in the cubic case. I'll call it 0.921.
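I'm reading that constant as playing the same role c does above, i.e. the single control point of the quadratic quarter arc from (0, 1) to (1, 0) sits at (k, k); that reading is my assumption. Measured the same way:

def quad_total_error(k, samples=10000):
    # Total absolute radial error of the quadratic Bezier quarter arc
    # P0 = (0,1), P1 = (k,k), P2 = (1,0), assuming k scales the control point.
    total = 0.0
    for i in range(samples + 1):
        t = i / samples
        mt = 1.0 - t
        x = 2.0 * mt * t * k + t * t
        y = mt * mt + 2.0 * mt * t * k
        total += abs((x * x + y * y) ** 0.5 - 1.0)
    return total

print(quad_total_error(0.921))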
