Re: Efficient sampling of many lights


(L) [2014/03/21] [post by cessen] [Re: Efficient sampling of many lights] Wayback!

>> friedlinguini wrote:The fact that you don't like it doesn't make it biased.
Ha ha, fair enough. [SMILEY :-)] Indeed, my understanding of the term "biased" was incorrect.  Lightcuts is unbiased.

Nevertheless, the lightcuts paper presents an algorithm that, without modification, will never converge to a correct solution (even modulo caustics from perfect reflectors/refractors), and results in visually objectionable flickering artifacts in animation.  And one of the main benefits of lightcuts, as I understand it, is precisely that smooth structured error, which in still images is not at all obvious as error.  From an applications and usability standpoint, that puts lightcuts in the same camp as e.g. photon mapping and radiance caching in my mind.  So that's where I was coming from.

But, again, still unbiased.  I understand that now.
(L) [2014/03/21] [post by ingenious] [Re: Efficient sampling of many lights] Wayback!

>> cessen wrote:Nevertheless, the lightcuts paper presents an algorithm that, without modification, will never converge to a correct solution (even modulo caustics from perfect reflectors/refractors), and results in visually objectionable flickering artifacts in animation.  And one of the main benefits of lightcuts, as I understand it, is precisely that smooth structured error, which in still images is not at all obvious as error.  From an applications and usability standpoint, that puts lightcuts in the same camp as e.g. photon mapping and radiance caching in my mind.  So that's where I was coming from.
Since all sampling in Lightcuts is unbiased, if you average the results of many independently rendered images, you will converge to the correct solution. What's bothering you is the correlated noise, which is the same kind of noise as in path tracing, except that some of the random numbers drawn are shared across all pixels, hence the visual patterns.
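To see what I mean, here's a minimal toy sketch (a 1-D per-pixel integral in Python, not a real renderer; all names made up): sharing one set of random numbers across pixels gives every pixel the same error in a single run, which is the structured pattern, yet averaging independent runs still converges.
[CODE]
import random

def render(pixels, n_samples, rng):
    """Estimate, per pixel p, the integral of f(x, p) = p + x over [0, 1).
    Exact answer: p + 0.5. One set of random numbers is reused for every
    pixel, so the error is identical (correlated) across the image."""
    xs = [rng.random() for _ in range(n_samples)]
    return [sum(p + x for x in xs) / n_samples for p in pixels]

pixels = range(4)

# A single run: every pixel is off from p + 0.5 by the same amount.
print(render(pixels, 4, random.Random(1)))

# Averaging many independent runs (fresh random numbers each time) converges,
# because each run is unbiased for every pixel despite the correlation.
runs = [render(pixels, 4, random.Random(seed)) for seed in range(5000)]
print([sum(r[i] for r in runs) / len(runs) for i in range(4)])  # ~[0.5, 1.5, 2.5, 3.5]
[/CODE]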

What do you think about Coherent Path Tracing then:
[LINK http://graphics.ucsd.edu/~iman/coherent_path_tracing.php]

Is it unbiased according to your understanding? [SMILEY :)] It's exactly as unbiased as Keller's original instant radiosity (minus the VPL clamping).

There is a difference between correlated sampling and bias. In fact, the "artifacts" you see in (progressive) photon mapping are 99% due to correlated sampling in typical scenes with lots of flat surfaces. They are not bias. For example, you can generate 10 independent photon maps and for each pixel choose a random one to do your query. This will decorrelate the photon sampling to some degree, though it will lower its efficiency.
(L) [2014/03/21] [post by cessen] [Re: Efficient sampling of many lights] Wayback!

>> ingenious wrote:What do you think about Coherent Path Tracing then:
[LINK http://graphics.ucsd.edu/~iman/coherent_path_tracing.php]

Is it unbiased according to your understanding?
So, first I'll make a distinction between my current understanding of the term "unbiased", and my understanding of it prior to this discussion.

My prior (incorrect) understanding of the term was that it essentially meant, "Given infinite time, this algorithm will converge to the correct solution."  Under that (incorrect) definition, the paper you reference above is, indeed, unbiased.  But lightcuts as presented in the original paper still is not.

But, interestingly, neither is the algorithm I presented in the OP when taken on its own.  The function pseudo-code I presented does not iterate to find infinitely many lights over time; it only selects a single light.  For it to converge to a correct solution, it needs to be used by another algorithm that _does_ iterate.  This is analogous to lightcuts.

My new understanding of the term "unbiased" after this discussion includes both lightcuts and the function pseudo-code I presented (at least, if you replace the input variable of the function with a random number source), because the choices they make are not biased in any particular direction, even if those choices are only made once.
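As a sanity check on that intuition, here's a tiny sketch in Python (made-up names, not my actual OP pseudo-code): picking a single light with probability proportional to some importance weight and dividing by that probability has the full sum over all lights as its expected value, even though only one choice is ever made.
[CODE]
import random

def pick_one_light(weights, rng):
    """Choose a single light index with probability proportional to its weight.
    Returns (index, probability)."""
    total = sum(weights)
    u = rng.random() * total
    running = 0.0
    for i, w in enumerate(weights):
        running += w
        if u <= running:
            return i, w / total
    return len(weights) - 1, weights[-1] / total  # guard against float round-off

def single_light_estimate(contributions, weights, rng):
    """Unbiased single-sample estimate of sum(contributions):
    E[contributions[i] / prob[i]] = sum(contributions)."""
    i, prob = pick_one_light(weights, rng)
    return contributions[i] / prob

contributions = [0.1, 2.0, 0.4, 1.5]   # what each light actually adds (sum = 4.0)
weights = [0.2, 2.5, 0.5, 1.8]         # selection importance; need not match exactly
rng = random.Random(7)
estimates = [single_light_estimate(contributions, weights, rng) for _ in range(100000)]
print(sum(estimates) / len(estimates))  # close to 4.0
[/CODE]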

Is my new understanding closer to correct?  Am I still missing something?
(L) [2014/03/22] [post by friedlinguini] [Re: Efficient sampling of many lights] Wayback!

>> cessen wrote:Is my new understanding closer to correct?  Am I still missing something?
There's probably a FAQ somewhere about this.

"Unbiased" means that the expected value for a random sample equals the correct value. I.e., if you look at every possible sample you might generate with a rendering algorithm, weight each of those samples by the probability of generating it, add all of these up, and get the correct answer, then the algorithm is unbiased.
"Consistent" means that given enough time, an algorithm will converge to the correct value. I think this is what you were thinking of as unbiased.

These two notions are orthogonal to one another:
Unbiased + consistent: Path tracing, metropolis light transport, lightcuts with repeated VPL generation.
Biased + consistent: Progressive photon mapping, VCM, path space regularization.
Unbiased + inconsistent: Lightcuts with a fixed number of VPLs, path tracing with the same random number seed at every iteration.
Biased + inconsistent: Photon mapping, irradiance caching.
(L) [2014/03/22] [post by friedlinguini] [Re: Efficient sampling of many lights] Wayback!

Googled a bit. Before ingenious yells at me I should point out this paper: [LINK http://cs.au.dk/~toshiya/misc.pdf]

It explains things more thoroughly than I did and also points out start-up bias in metropolis light transport.
(L) [2014/03/22] [post by ingenious] [Re: Efficient sampling of many lights] Wayback!

>> cessen wrote:...Under that (incorrect) definition, the paper you reference above is, indeed, unbiased.  But lightcuts as presented in the original paper still is not.
You keep saying this, but I still don't know what your reasoning is.
 >> friedlinguini wrote:Googled a bit. Before ingenious yells at me I should point out this paper: [LINK http://cs.au.dk/~toshiya/misc.pdf]

It explains things more thoroughly than I did and also points out start-up bias in metropolis light transport.
Yeah, I wonder why Toshiya hasn't put this report on his web page. It does indeed nicely summarize some very common misconceptions. Though it doesn't touch on correlated sampling.
(L) [2014/03/22] [post by stefan] [Re: Efficient sampling of many lights] Wayback!

>> friedlinguini wrote:These two notions are orthogonal to one another:
Unbiased + consistent: Path tracing, metropolis light transport, lightcuts with repeated VPL generation.
Biased + consistent: Progressive photon mapping, VCM, path space regularization.
Unbiased + inconsistent: Lightcuts with a fixed number of VPLs, path tracing with the same random number seed at every iteration.
Biased + inconsistent: Photon mapping, irradiance caching.
I'm not so sure about the latter two. I believe Mr Jensen refers to photon mapping as consistent (= throw an infinite number of photons at it, and it will converge to the correct result). Irradiance caching with a very high number of rays per cache entry and a very low max error gets you to at least one cache entry per pixel, therefore converging to an equivalent path-traced result. I haven't seen anyone implement progressive irradiance caching yet (I've been meaning to try), but it would be an interesting experiment.
(L) [2014/03/22] [post by stefan] [Re: Efficient sampling of many lights] Wayback!

>> friedlinguini wrote:Googled a bit. Before ingenious yells at me I should point out this paper: [LINK http://cs.au.dk/~toshiya/misc.pdf]

It explains things more thoroughly than I did and also points out start-up bias in metropolis light transport.
Thanks, that is a great paper!
(L) [2014/03/22] [post by cessen] [Re: Efficient sampling of many lights] Wayback!

>> friedlinguini wrote:cessen wrote:Is my new understanding closer to correct?  Am I still missing something?
"Unbiased" means that the expected value for a random sample equals the correct value. I.e., if you look at every possible sample you might generate with a rendering algorithm, weight each of those samples by the probability of generating it, add all of these up, and get the correct answer, then the algorithm is unbiased.
Yeah, after looking up the term "expected value", that's exactly what I meant.  I was imagining, for example, rolling a die, and it converging on 3.5 (apparently the "expected value").  But even with a single roll it is still unbiased, even though the result might be 2, because the die is fair and the error introduced is also thus "fair" in some sense.  (Of course, this analogy doesn't account for pdf's, etc. But it represents my intuitive understanding of the concept right now.)
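Spelled out, that expected value is just the probability-weighted sum of the faces:
[CODE]
\mathbb{E}[X] = \sum_{k=1}^{6} k \cdot \tfrac{1}{6} = \frac{1 + 2 + 3 + 4 + 5 + 6}{6} = 3.5
[/CODE]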
 >> friedlinguini wrote:"Consistent" means that given enough time, an algorithm will converge to the correct value. I think this is what you were thinking of as unbiased.
I think my idea of unbiased was actually "unbiased + consistent".  I can see now how it is useful to separate those concepts.  Indeed, I'm much more concerned about consistent than unbiased.  I really appreciate you taking the time to help explain these concepts to me!

Incidentally, I'm not sure if this is a property of consistency or not, but since my target use-case is rendering animations, it is strictly important to me that rendering of animations be flicker-free.  Or, perhaps put another way, it's important to me that the continuity of the rendered result across multiple frames accurately represents the continuity of the scene description across multiple frames.
 >> friedlinguini wrote:Googled a bit. Before ingenious yells at me I should point out this paper: [LINK http://cs.au.dk/~toshiya/misc.pdf]
Awesome!  Thanks so much!  That definitely helps.  It's also comforting to see him use a similar die analogy as I was thinking of.
 >> ingenious wrote:cessen wrote:...Under that (incorrect) definition, the paper you reference above is, indeed, unbiased. But lightcuts as presented in the original paper still is not.
You keep saying this, but I still don't know what your reasoning is.
Lightcuts, as presented in the original paper, builds the point lights and the light tree once, and proceeds to do the rest of the rendering process with those lights and with that tree.  To use the die analogy again, it rolls the die once, and just uses that result.  No matter how long it runs, it never re-rolls the die.  Thus no matter how long it runs, it will never converge on the correct solution.  Unless I misunderstood the paper, or am missing something else...

That's not to say that you couldn't build a different algorithm that runs the original lightcuts algorithm in a loop with new random numbers each time, and combines the results.  That would then converge.  But the method as presented in the original paper would not.  Unless, again, I'm missing something.
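Something like this is what I mean by the outer loop (a minimal Python sketch; render_once and the toy pass below are hypothetical stand-ins, not actual Lightcuts): averaging independent runs of any single-pass unbiased renderer, each with fresh random numbers, gives a consistent progressive estimate.
[CODE]
import random

def progressive_render(render_once, n_passes, n_pixels):
    """Average n_passes independent runs of a single-pass unbiased renderer.
    render_once(rng) must return n_pixels values whose expected value is the
    correct image; the running mean then converges as n_passes grows."""
    accum = [0.0] * n_pixels
    for p in range(n_passes):
        image = render_once(random.Random(p))  # fresh random numbers each pass
        for i, v in enumerate(image):
            accum[i] += v
    return [v / n_passes for v in accum]

def toy_pass(rng, n_pixels=4):
    """Stand-in for 'Lightcuts with freshly sampled VPLs': unbiased per pixel,
    but one shared random draw shifts the whole image (correlated error)."""
    noise = rng.random() - 0.5
    return [i + noise for i in range(n_pixels)]

print(toy_pass(random.Random(0)))              # one pass: a uniformly shifted image
print(progressive_render(toy_pass, 10000, 4))  # converges to [0.0, 1.0, 2.0, 3.0]
[/CODE]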
(L) [2014/03/22] [post by ingenious] [Re: Efficient sampling of many lights] Wayback!

>> cessen wrote:Lightcuts, as presented in the original paper, builds the point lights and the light tree once, and proceeds to do the rest of the rendering process with those lights and with that tree.  To use the die analogy again, it rolls the die once, and just uses that result.  No matter how long it runs, it never re-rolls the die.  Thus no matter how long it runs, it will never converge on the correct solution.  Unless I misunderstood the paper, or am missing something else...
No matter how long what runs? The algorithm, as presented in the paper, always runs a finite amount of time for a fixed number of initial virtual point lights. If you start increasing this number, then the light tree would become bigger. In order to converge to the correct solution, the light cut for each pixel should also increase with the size of the tree. Depending on the heuristics used in the cut construction, this may or may not happen. The original description in the Lightcuts paper bounds the size of the cut, as far as I remember. If this is the case, it won't converge as you increase the number of light samples.
 >> cessen wrote:That's not to say that you couldn't build a different algorithm that runs the original lightcuts algorithm in a loop with new random numbers each time, and combines the results.  That would then converge.  But the method as presented in the original paper would not.  Unless, again, I'm missing something.
Every unbiased method has been originally described as a single-pass finite-sample algorithm. Progressive rendering with an "outer loop" came much later, so in that regard Lightcuts is on the same "level" as path tracing.
 >> cessen wrote:Incidentally, I'm not sure if this is a property of consistency or not, but since my target use-case is rendering animations, it is strictly important to me that rendering of animations be flicker-free.  Or, perhaps put another way, it's important to me that the continuity of the rendered result across multiple frames accurately represents the continuity of the scene description across multiple frames.
In this case lightcuts, or any algorithm that heavily relies on inter-pixel correlated sampling, is not very suitable.
(L) [2014/03/23] [post by cessen] [Re: Efficient sampling of many lights] Wayback!

>> ingenious wrote:No matter how long what runs? The algorithm, as presented in the paper, always runs a finite amount of time for a fixed number of initial virtual point lights.
In the paper they put it in the context of a ray tracer that is still shooting eye rays.  So I was imagining that e.g. antialiasing and DoF would still be computed with Monte Carlo ray tracing.  But you're right, that's not actually part of the algorithm itself.  So more simply, it just won't converge.
 >> ingenious wrote:Every unbiased method has been originally described as a single-pass finite-sample algorithm. Progressive rendering with an "outer loop" came much later, so in that regard Lightcuts is on the same "level" as path tracing.
Yeah.  Although I think there is a difference in presenting an algorithm as one-pass, and presenting an algorithm as arbitrarily many passes.  Both are finite, but the latter typically implies some kind of limit solution.
EDIT:  Err... mixing up my terminology.  The above should have read:
"Yeah.  Although I think there is a difference in presenting an algorithm as single-sample, and presenting an algorithm as arbitrarily many samples.  Both are finite, but the latter typically implies some kind of limit solution."
Or something along those lines, anyway.
(L) [2014/03/25] [post by phkahler] [Re: Efficient sampling of many lights] Wayback!

I tried something a bit similar several years ago. My goal then was to use photon mapping for indirect lighting and rays for direct lighting. I put all light sources into an octree. The lights would be inserted in the tree using a bounding box based on brightness. You could use more arbitrary shapes if you had a way to determine a relevant bounding volume. Then for illumination at a point, you'd just drill down the octree nodes containing that point and use the lights at each level. My trees allow objects (in this case lights) at all levels, not just leaf nodes. I had a maze with a light at each "corner" position running 1280x1024 at 5-10 fps with over 200 lights (no photon map).
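A rough sketch of the idea (simplified Python, hypothetical names; points with a brightness-derived influence radius instead of arbitrary bounding volumes): a light is pushed down the tree only while its box still fits inside the child cell, so bright lights sit near the root and are gathered by every query point, while dim lights sink deep and are only gathered nearby.
[CODE]
import math

class LightOctree:
    def __init__(self, center, half_size):
        self.center = center        # (x, y, z) center of this cell
        self.half_size = half_size  # half the cell's edge length
        self.lights = []            # lights stored at this level
        self.children = None        # 8 children, created lazily

    def _fits(self, pos, radius):
        """Does a box of +/- radius around pos fit entirely inside this cell?"""
        return all(abs(pos[i] - self.center[i]) + radius <= self.half_size for i in range(3))

    def _child_for(self, pos):
        if self.children is None:
            h = self.half_size / 2
            self.children = [LightOctree(
                tuple(self.center[i] + (h if (o >> i) & 1 else -h) for i in range(3)), h)
                for o in range(8)]
        return self.children[sum(1 << i for i in range(3) if pos[i] >= self.center[i])]

    def insert(self, pos, power, min_half_size=0.5):
        radius = math.sqrt(power)   # crude inverse-square "influence" radius
        node = self
        while node.half_size > min_half_size:
            child = node._child_for(pos)
            if not child._fits(pos, radius):
                break               # child cell too small for this light's box
            node = child
        node.lights.append((pos, power))

    def lights_at(self, p):
        """Drill down through the cells containing p, gathering the lights
        stored at every level along the way."""
        found, node = [], self
        while node is not None:
            found.extend(node.lights)
            node = None if node.children is None else node._child_for(p)
        return found

tree = LightOctree(center=(0.0, 0.0, 0.0), half_size=64.0)
tree.insert((10.0, 2.0, -5.0), power=10000.0)  # bright: stays at the root
tree.insert((10.0, 2.0, -5.0), power=0.25)     # dim: sinks several levels down
print(len(tree.lights_at((10.0, 2.0, -5.0))))     # 2 -- both lights gathered here
print(len(tree.lights_at((-40.0, -40.0, 40.0))))  # 1 -- only the bright one matters
[/CODE]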
I've always considered a city like you show - with thousands of lights in a polygon soup, and lots of people walking around - to be a goal.
(L) [2016/03/31] [post by cessen] [Re: Efficient sampling of many lights] Wayback!

Never got around to writing a paper on the technique in the OP, but it looks like someone else did!
[LINK http://jcgt.org/published/0005/01/02/]
The second half of the paper describes a technique essentially the same as my OP, but in more formal terms.  Really cool!  Hopefully this approach will get more visibility now, and perhaps further research. [SMILEY :-)]
EDIT:
It seems I maybe got a little over-excited.  I read the paper again more thoroughly, and the technique they present isn't quite the same.  To converge to a consistent result, their approach requires multi-pass progressive rendering, whereas the approach in my OP works quite happily within a single pass.  Nevertheless, the concepts are very similar!  Variations on the same theme.
