consistency and unbiasedness
[2012/01/21] [post by outofspace]
Just wanted to repeat an argument that we ended up discussing off-topic in another thread:
Many in graphics seem to think that unbiasedness offers some great advantage over consistency.
Well, it's not really the case - let me explain why in very simple terms.
All that unbiased means in practice is that, by the law of large numbers, the average of infinitely many independent finite runs of the algorithm converges to zero error (the central limit theorem then characterizes how fast).
(It also means that each run has an expected error of zero over infinitely many realizations, but that is basically equivalent to the above, and in practical terms has no value at all, since we are always dealing with individual runs.)
Which is pretty much the definition of consistency: the limit value of the algorithm's output has zero error.
So in the real world, all that matters is (a) having a convergent algorithm, and (b) being able to run the
algorithm indefinitely (e.g. the algorithm's resource usage shouldn't grow with time).
Once these two points are satisfied, the only important factor is convergence speed.
What unbiasedness often gives compared to pure consistency is just a mathematical way to
reason about the convergence speed in terms of probability theory - but if you can get that by
other means, there's no advantage at all.
Convergence speed is what dictates the robustness of any consistent algorithm, whether it's
unbiased or not.
P.S. Interestingly, some unbiased algorithms are not even consistent, as would be the case for a basic Metropolis sampler without restarts. So in a sense unbiasedness is a weaker property than consistency, in terms of practical convergence properties.
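As a rough numerical illustration of the argument above (the integrand, the clamping rule, and all constants are assumptions chosen only for this sketch, not anything from the thread): an unbiased plain Monte Carlo estimator and a biased-but-consistent variant that clamps large sample values, with a clamp threshold that grows with the sample count so the bias vanishes in the limit. Both errors go to zero; they only differ in how fast.

    # Toy sketch: estimate I = integral of 0.6 * x^(-0.4) over [0,1] = 1.
    # (a) plain Monte Carlo: unbiased for every n.
    # (b) clamped Monte Carlo: biased for finite n (clamping loses energy,
    #     like firefly clamping in a renderer), but consistent because the
    #     clamp threshold grows with n.
    import numpy as np

    rng = np.random.default_rng(0)

    def f(x):
        return 0.6 * x**-0.4          # integrable spike at x = 0

    for n in [10**3, 10**4, 10**5, 10**6]:
        samples  = f(rng.random(n))
        unbiased = samples.mean()
        clamped  = np.minimum(samples, 0.5 * n**0.25).mean()
        print(f"n={n:>8}  unbiased err={abs(unbiased - 1):.5f}  "
              f"clamped err={abs(clamped - 1):.5f}")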
[2012/01/21] [post by ingenious]
Unbiased estimators still have a couple of advantages though. Importantly, you can compute N independent instances of them, and averaging these instances reduces the variance by a factor of N (and hence the RMS error by a factor of sqrt(N)). This does not necessarily hold for biased estimators, which have another source of error apart from variance: bias.
The second thing is that the only source of error in an unbiased estimate is variance, for which there exist standard tools to measure and analyze it. In contrast, the error of a biased estimator in general depends on its definition and has to be analyzed individually.
Another thing is that the biased estimators used in image synthesis often have a lower order of convergence than their unbiased counterparts. But this doesn't necessarily mean that biased is always worse. For direct illumination, unbiased estimators are better due to their higher order of convergence. For reflected caustics, on the other hand, the unbiased estimators have huge initial variance, and it takes many, many samples for the higher order of convergence to kick in and produce a better estimate than the photon mapping estimator - most often you simply don't want to wait that long.
That's why an "unbiased" stamp on the product box means absolutely nothing. In fact, it can be interpreted as a negative quality.
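A quick hedged sketch of the averaging point (the integrand, the run length, and the artificial 0.01 offset standing in for bias are all assumptions for illustration): averaging N independent unbiased runs drives the error down roughly like 1/sqrt(N), while the same averaging of a biased estimator stalls at the bias.

    # Toy sketch: average N independent short runs of (a) an unbiased
    # estimator and (b) a biased one. Target: I = integral of x^2 over [0,1] = 1/3.
    import numpy as np

    rng = np.random.default_rng(1)
    samples_per_run = 64
    true_value = 1.0 / 3.0

    def one_run(bias=0.0):
        x = rng.random(samples_per_run)
        return (x**2).mean() + bias       # the offset models a fixed bias

    for N in [1, 16, 256, 4096]:
        unbiased_avg = np.mean([one_run()     for _ in range(N)])
        biased_avg   = np.mean([one_run(0.01) for _ in range(N)])
        print(f"N={N:>5}  unbiased err={abs(unbiased_avg - true_value):.5f}  "
              f"biased err={abs(biased_avg - true_value):.5f}")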
[2012/01/22] [post by outofspace]
Sure, as I said, unbiasedness gives you the tools of probability theory to analyze convergence speed.
Unfortunately variance is the square of the error, so the error itself only decreases as O(N^-1/2) - not that good in practice. ;)
And I find it interesting that, to attain a desirable convergence speed, people have gone through sampling methods that were originally biased (i.e. deterministic low-discrepancy sequences) and were later randomized.
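To make the randomized-sequence remark concrete, here is a hedged toy comparison (the integrand and sample counts are assumptions for the sketch): plain Monte Carlo versus a base-2 van der Corput sequence with a random Cranley-Patterson shift. The shift keeps the estimator unbiased while retaining the deterministic sequence's faster convergence on this smooth integrand.

    # Toy sketch: plain MC vs. a randomized low-discrepancy sequence on
    # I = integral of x^2 over [0,1] = 1/3.
    import numpy as np

    rng = np.random.default_rng(2)

    def van_der_corput(n, base=2):
        # radical-inverse sequence in [0, 1)
        pts = np.zeros(n)
        for i in range(n):
            f, denom, k = 0.0, 1.0, i
            while k > 0:
                denom *= base
                f += (k % base) / denom
                k //= base
            pts[i] = f
        return pts

    true_value = 1.0 / 3.0
    for n in [256, 1024, 4096]:
        mc    = (rng.random(n)**2).mean()
        shift = rng.random()                              # Cranley-Patterson rotation
        qmc   = (((van_der_corput(n) + shift) % 1.0)**2).mean()
        print(f"n={n:>5}  MC err={abs(mc - true_value):.6f}  "
              f"randomized QMC err={abs(qmc - true_value):.6f}")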
On the other hand, sometimes you might be able to characterize the convergence speed of the bias as well, but even in that case many people refrain from accepting consistent methods - for no apparently rational reason.
Not to mention that in light transport simulation the so-called "unbiased sampling methods" are only unbiased for a restricted set of inputs, and are in fact very biased in the presence of SDS paths, in the sense that they miss them entirely. ;)
[2012/01/22] [post by joedizzle]
@outofspace... Referring to light transport simulation, calling the so-called "unbiased sampling methods" biased because they fail on SDS paths is quite vague. The bias in this case comes from the scene not being a "100% physical scene": the virtual pinhole camera model is what contributes the bias. Does that mean that modelling a physical camera will still result in the SDS paths being missed entirely? ;)
[2012/01/23] [post by outofspace]
The deeper problem is that you can construct scenes which are arbitrarily close to the SDS case,
and in the limit local path sampling techniques will fail.
This means that you can get arbitrarily close to failing, i.e. get arbitrarily high variance.
In reality, if you try to model a real camera (with a real aperture) and a real architectural scene,
with filament lamps, reflectors, glass panes, etc (or even the sun / moon) - any unbiased local
path sampling will result in an utter disaster, as the probabilities of hitting the camera and/or the
lights are just exceptionally small.
Metropolis and other MCMC samplers can improve things, but not quite enough.
The need for biased techniques is justified by pretty much everything you see out there, in daily life.
[2012/01/25] [post by keldor314]
There's one very practical difference between unbiased and merely consistent renderers. To see it, imagine that you have a render farm and you'd like to use a large number of separate computers to render an image. With an unbiased renderer, you can simply have each of them render the image with fewer samples, and then average the resulting images together to obtain a high-quality image. Contrast that with a merely consistent (biased) renderer, where simply averaging a bunch of low-quality frames does NOT result in a high-quality frame.
Think of it this way - a typical consistent renderer combines a path tracing pass with some sort of filter to remove noise, basically a special image-aware blur in some sense. Now, imagine you average two of these images, each with the noise blurred out. The result is an image just as blurry as either of the input images. Simply averaging images together does not remove the blurring! You'd have to average the images together before the blurring (de-noising) stage for this to work.
The result of this is that it's far easier to configure an unbiased renderer to run on a distributed system than a biased renderer.
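A small hedged sketch of the blur argument (a 1-D "image" and a plain box filter stand in for the renderer and its denoiser; everything here is an illustrative assumption): averaging raw noisy frames keeps improving with the number of frames, while averaging already-filtered frames bottoms out at the filter's bias.

    # Toy sketch: noise averages away across frames, blur (bias) does not.
    import numpy as np

    rng = np.random.default_rng(3)
    signal = np.zeros(200)
    signal[90:110] = 1.0                      # a sharp bright feature

    def noisy_frame():
        return signal + rng.normal(0.0, 0.3, signal.size)

    def box_filter(img, radius=5):
        kernel = np.ones(2 * radius + 1) / (2 * radius + 1)
        return np.convolve(img, kernel, mode="same")

    def rms(img):
        return np.sqrt(np.mean((img - signal)**2))

    for k in [1, 4, 16, 64]:
        raw      = [noisy_frame() for _ in range(k)]
        filtered = [box_filter(frame) for frame in raw]
        print(f"k={k:>3}  avg of raw frames err={rms(np.mean(raw, axis=0)):.4f}  "
              f"avg of filtered frames err={rms(np.mean(filtered, axis=0)):.4f}")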
[2012/01/25] [post by ingenious]
This is indeed a valid point, and it has been acknowledged already (see my first post in this thread). However, with some knowledge about your biased estimator, you can still set up distributed rendering properly. Most consistent estimators in use are based on some sort of filtering, where a parameter (e.g. the kernel radius) trades off variance against bias. Since the error of the final estimate is governed by the MSE = variance + bias^2, if you have N rendering nodes and make each node produce an image whose variance is N times its squared bias (i.e. variance = N * bias^2), you get an optimal error after the images are averaged. In the particular case of progressive photon mapping, as Knaus & Zwicker showed, you can easily compute the images with different radii independently and average them.
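To put rough numbers on that balancing rule, here is a hedged toy model (the variance/bias formulas and all constants are assumptions for illustration, not anything from the post or from a particular renderer): per-node variance(r) = c_v / (n r^2) and bias(r) = c_b r^2 for kernel radius r and n samples per node; averaging N nodes divides the variance by N but leaves the bias untouched.

    # Toy sketch: MSE(r, N) = variance(r)/N + bias(r)^2 under the assumed model.
    # Compare reusing the single-node radius on N nodes against a radius chosen
    # so that variance(r) is about N times the squared bias, as suggested above.
    import numpy as np

    c_v, c_b, n = 1.0, 1.0, 10_000            # illustrative constants only

    def variance(r): return c_v / (n * r**2)
    def bias(r):     return c_b * r**2
    def mse(r, N):   return variance(r) / N + bias(r)**2

    radii = np.linspace(1e-3, 0.5, 20_000)

    for N in [1, 16, 256]:
        r_single   = radii[np.argmin(mse(radii, 1))]
        r_balanced = radii[np.argmin(np.abs(variance(radii) - N * bias(radii)**2))]
        print(f"N={N:>4}  single-node radius MSE={mse(r_single, N):.2e}  "
              f"variance ~ N*bias^2 radius MSE={mse(r_balanced, N):.2e}")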
Finally, in practice you don't want to use purely random numbers anyway, as the resulting estimators have lower order of convergence than when using quasi-random sequences. So you may still want to sync seeds between the nodes. It's not a black & white situation at all :)
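One hedged way to picture the seed syncing (an illustrative assumption, not necessarily what anyone in the thread actually does): hand each node an interleaved slice of one global low-discrepancy sequence, so that the pooled samples are exactly the sequence a single machine would have used.

    # Toy sketch: leapfrog-partition a base-2 radical-inverse sequence across
    # nodes; pooling the per-node sums reproduces the single-machine estimate.
    import numpy as np

    def radical_inverse(indices, base=2):
        pts = np.zeros(len(indices))
        for j, i in enumerate(indices):
            f, denom, k = 0.0, 1.0, i
            while k > 0:
                denom *= base
                f += (k % base) / denom
                k //= base
            pts[j] = f
        return pts

    total_samples, num_nodes = 4096, 8
    node_sums = []
    for node in range(num_nodes):
        idx = range(node, total_samples, num_nodes)          # this node's sample indices
        node_sums.append(np.sum(radical_inverse(idx)**2))    # toy integrand x^2
    pooled = sum(node_sums) / total_samples

    single = np.mean(radical_inverse(range(total_samples))**2)
    print("pooled over nodes:", pooled, "  single machine:", single)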