Sampling lights
(L) [2011/12/19] [tby jbikker] [Sampling lights] Hello,
A while ago I asked some questions about light sampling. Sadly, this was on the old forum, and I can't access the info anymore.
The thing is this: I used to sample lights by selecting a random light source and then picking a random position on that light. To reduce variance, I favored large, bright lights over small, dim ones. I did not take the orientation or position of the lights into account; doing so would require evaluating all light sources for every intersection point, which seems inefficient for scenes with many lights. However, basing the probability merely on size and brightness has a major disadvantage: small lights (such as lamp posts) are swamped by the sun, and about 50% of the time a light is selected that does not even face the intersection point.
So a while ago I came up with the following solution: first, a limited set of lights is chosen based on size and intensity (but every light has a chance of being selected). Then, for each light in this set I calculate the potential contribution to the intersection point (based on position and orientation, but not visibility). Based on these contributions, I make the final selection. Everything stays unbiased, but now the probability of selecting a far-away, non-facing light is small.
Recently I was told that this may not be a new idea. Apparently, the idea of selecting lights based on distance and orientation is already used (although not in a two-step process). But, I can't find any info on this... Does anyone know of a good reference?
- Jacco.
(L) [2011/12/19] [tby ingenious] [Sampling lights] This is a classic importance resampling approach. It may have been me who told you this; I remember such a conversation.
The idea is as you describe it: first select a few points in the sampling domain using some easy distribution. Then evaluate a more complex weight for each of these points, in your case the precise unoccluded radiance contribution. Finally, use these evaluations as weights for resampling. And actually, in your case, lights/points that have zero unoccluded contribution should have zero chance of being picked in the resampling step.
I can give you two references about this, which are more recent applications of the resampled importance sampling (RIS) approach to illumination. They cite the original paper.
* Bidirectional Importance Sampling for Direct Illumination
* Importance Resampling for Global Illumination (there's also a well-written master's thesis with this title, by Justin Talbot; I recommend it)
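For concreteness, here is a minimal sketch of that resampling step (all names are made up, not from the papers; estimateContribution stands in for whatever cheap unoccluded estimate you use):
Code:
#include <vector>
#include <cstdlib>

struct LightSample { int lightId; float sourcePdf; };

// Assumed helper: unoccluded contribution estimate (position/orientation, no visibility).
float estimateContribution(const LightSample& s);

// Resample one of M candidates with probability proportional to its estimated
// contribution. Returns the candidate index, or -1 if nothing contributes;
// outPdf is the effective pdf to divide the final (visibility-tested) sample by.
int resampleLight(const std::vector<LightSample>& candidates, float& outPdf)
{
    const size_t M = candidates.size();
    std::vector<float> pHat(M), w(M);
    float wSum = 0.0f;
    for (size_t i = 0; i < M; i++)
    {
        pHat[i] = estimateContribution(candidates[i]); // target importance
        w[i] = pHat[i] / candidates[i].sourcePdf;      // RIS weight
        wSum += w[i];
    }
    if (wSum == 0.0f) return -1;                       // no candidate faces the point
    float r = wSum * (rand() / (RAND_MAX + 1.0f));     // pick proportionally to w
    size_t i = 0;
    while (i + 1 < M && (r -= w[i]) > 0.0f) i++;
    outPdf = pHat[i] * M / wSum;                       // effective RIS pdf of the winner
    return (int)i;
}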
(L) [2011/12/19] [tby spectral] [Sampling lights] Interesting subject... in fact there are so many possible cases :-P
Here are some references that I have found but not yet read:
[LINK https://sites.google.com/site/isrendering/]
[LINK http://graphics.cs.ucf.edu/gpusampling/]
[LINK http://sirkan.iit.bme.hu/~szirmay/lightsource.pdf]
[LINK http://www.cs.ubc.ca/~heidrich/Papers/PG.06.pdf]
[LINK http://www.open.ou.nl/ako/publications/eg92.pdf]
[LINK http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.62.4195&rep=rep1&type=pdf]
Keep us informed if you find something interesting ;-)
(L) [2011/12/19] [tby madd] [Sampling lights] Old Shirley papers: [LINK http://www.cs.utah.edu/~shirley/papers/rw91.pdf]
and [LINK http://www.cs.utah.edu/~shirley/papers/tog94.pdf].
(L) [2011/12/19] [tby jbikker] [Sampling lights] Thanks a lot! I'm going to digest.
EDIT: "Importance Resampling for Global Illumination" is exactly what I was looking for. Section 3, third paragraph:
"We can (...) generate a set of samples from a source distribution, p, weight these samples appropriately, then resample these samples by drawing a single sample from them with probability proportional to its weight."
(L) [2011/12/20] [tby spectral] [Sampling lights] Great,
Also, can you tell us how you achieve non-uniform sampling of the light sources? For example, if a light source has 2x more power than another one, do you sample it 2x more often? And once done, how do you normalize the contribution?
How do you do this on the GPU? Maybe there is an issue?
(L) [2011/12/20] [tby jbikker] [Sampling lights] Here's the theory as I understood and implemented it:
We have a point P for which we want to sample direct illumination, and a random ray L starting at P, extending in a random direction over the hemisphere of P.
The probability of this ray hitting a particular light source is proportional to the light's projected solid angle: its area times the cosine factors N1 dot L and N2 dot -L, divided by the squared distance. Scaled by the intensity of the light, this yields the correct result if we only perform a random walk. In practice, this leads to excessive noise, and therefore we sample direct lighting explicitly:
We send a ray to a randomly chosen light source (equal probability for all lights, regardless of properties), and we scale its intensity by its area, the inverse squared distance, the projected solid angle factors, and of course visibility. This approach is unbiased, but the result may still suffer from excessive variance. We thus prefer lights that are closer, larger, or with a larger projected solid angle. Favoring those is straightforward: if we double the probability that we sample a particular light source, we must halve whatever we find there, because twice as many rays (on average) will sample that light. In general, if we scale the probability of selecting a light source by p, we scale its contribution by 1/p. The result is unbiased for any p (under certain basic conditions), but a p that closely matches the actual distribution of the incoming light is optimal in terms of variance.
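As a sketch (all names here are hypothetical, not Brigade's actual code; assume a small vector library with dot/normalize), the bookkeeping for that 1/p rule is a single division:
Code:
// One explicit light sample for point P with surface normal N1; light i is
// selected with probability prob[i], and V is 1 if the shadow ray is unoccluded.
float sampleDirectLight(const Vec3& P, const Vec3& N1)
{
    int i = pickLight();                              // chosen with probability prob[i]
    Vec3 pos, N2;
    samplePointOnLight(i, pos, N2);                   // random position + normal on light i
    Vec3 L = normalize(pos - P);
    float d2 = dot(pos - P, pos - P);                 // squared distance
    float G = dot(N1, L) * dot(N2, -L) / d2;          // projected solid angle factors
    if (G <= 0.0f) return 0.0f;                       // light faces away from P
    float V = traceShadowRay(P, pos);                 // visibility: 0 or 1
    return V * intensity[i] * area[i] * G / prob[i];  // divide by p: stays unbiased
}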
To quickly select a light, we should use information that we can precalculate, so that we do not have to evaluate all lights for every shading operation. Therefore, Brigade uses an array of N elements. For M lights, this array is partitioned into M parts; all array elements in one part point to the same light. Without any probability scaling, each light would thus use N/M array elements. By scaling the sizes of the parts, we scale the probability of selecting a particular light source.
A light can now be selected from this array based on a single random variable. Note that the discrete nature of the array introduces a small error; this is acceptable in our implementation.
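A sketch of how such an array could be built and used (variable names and the constants are mine, not Brigade's; weight[i] would be something like area times intensity):
Code:
#include <algorithm>

const int N = 4096;                // array slots
const int M = 64;                  // number of lights (example value)
int   lightIndex[N];               // each slot points to one light
float probPerLight[M];             // quantized selection probability per light

void buildLightArray(const float* weight)  // weight[i] ~ area * intensity, size M
{
    float total = 0.0f;
    for (int i = 0; i < M; i++) total += weight[i];
    int slot = 0;
    for (int i = 0; i < M; i++)
    {
        // every light gets at least one slot so it keeps a nonzero probability;
        // the last light absorbs the rounding leftovers (sketch only, assumes N >> M)
        int slots = (i == M - 1) ? (N - slot)
                                 : std::max(1, (int)(N * weight[i] / total));
        probPerLight[i] = slots / (float)N;
        for (int j = 0; j < slots; j++) lightIndex[slot++] = i;
    }
}

int pickLight(float rnd)           // rnd in [0,1): one lookup, no CDF traversal
{
    return lightIndex[(int)(rnd * N)];
}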
So, we pick a number of lights (and their probabilities) from this array. For each of these lights, we accurately calculate the potential contribution to the direct illumination of P. We ignore visibility, for obvious reasons, so the contribution is still an estimate. We then pick one of the selected lights, with probability proportional to this more accurate estimate. We multiply the original probability (stored in the array) by the new probability to find the final probability. Done.
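Putting the two steps together (again a sketch with made-up names; rnd() is an assumed RNG returning a value in [0,1), and estimatePotential is the unoccluded contribution estimate):
Code:
// Two-step selection: cheap array pick, then contribution-based resampling.
int pickLightTwoStep(const Vec3& P, float& outPdf)
{
    const int K = 8;                          // number of candidates (tune per scene)
    int cand[K]; float w[K]; float wSum = 0.0f;
    for (int k = 0; k < K; k++)
    {
        cand[k] = pickLight(rnd());           // step 1: picked with probPerLight[cand[k]]
        w[k] = estimatePotential(cand[k], P); // step 2: position + orientation, no visibility
        wSum += w[k];
    }
    if (wSum == 0.0f) return -1;              // no candidate faces P
    float r = wSum * rnd();
    int k = 0;
    while (k + 1 < K && (r -= w[k]) > 0.0f) k++;
    // Final pdf: array probability times resampling probability; the factor K
    // accounts for having drawn K candidates (without it the estimate is K times
    // too bright).
    outPdf = probPerLight[cand[k]] * (w[k] / wSum) * K;
    return cand[k];
}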
In this scheme, variance is minimal when we sample all lights (or all array elements). However, we are not really looking for the lowest variance for a fixed number of shadow rays: instead, we want the lowest variance in a fixed amount of time. Accessing the array is relatively expensive in CUDA, while all calculations are almost 'free'. Therefore, we sample each light *twice*, hoping that we get some samples on large light sources. This doubles our number of samples (almost) for free.
I hope this makes sense. :)
- Jacco.
(L) [2011/12/21] [tby spectral] [Sampling lights] Very similar to my implementation :-P
So, I have also thought of using a 'curve' to sample the lights. For example, imagine you have an emitters array, like this:
Code:
struct emitter
{
    int shaderId;
    int meshStart;
    int meshCount;
};

emitter emitters[10];                            // we have 10 emitters here
int   lightSamplingCurve[10 * 100];              // up to 100x more slots per emitter
float lightSamplingCurveNormalizationFactor[10]; // selection probability per emitter
Then, use uniform sampling on the curve:
Code:
int slot      = (int)(rnd * 10 * 100);      // rnd in [0,1)
int emitterId = lightSamplingCurve[slot];   // emitters with more slots get picked more often
emitter e     = emitters[emitterId];
...
compute your contribution
...
// Scale your contribution: divide by the selection probability to stay unbiased
L = L / lightSamplingCurveNormalizationFactor[emitterId];
The real difficulty is to "tune" your strategy to generate the best curve :-)
Remark: I have not implemented it, but the idea is to keep your emitters array simple. This way you can even increase/decrease the curve precision.
(L) [2011/12/21] [tby ingenious] [Sampling lights] I don't understand the point of having a list larger than the number of light sources. What is it good for? The only reason I can think of is to avoid traversing a discrete CDF when picking a light. Uniformly selecting one element from the large list will then more often pick lights that are linked to by more elements in the list. But is that really such a useful optimization? It's actually not biased (or at least can easily be made unbiased), but it can be suboptimal when different lights have varying relative importance, since this effective quantization limits the precision. What you could also do is store actual light source samples in this list, thereby saving the effort of drawing two random numbers and sampling the chosen light source during shading. These samples are then essentially virtual point lights (VPLs).
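For comparison, traversing a discrete CDF is also cheap if you binary-search it (a sketch; cdf holds the running sum of the selection probabilities, so cdf[M-1] == 1):
Code:
#include <vector>
#include <algorithm>

// Returns the index i such that rnd falls in light i's probability interval.
int pickLightCDF(const std::vector<float>& cdf, float rnd)  // rnd in [0,1)
{
    return (int)(std::upper_bound(cdf.begin(), cdf.end(), rnd) - cdf.begin());
}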
(L) [2011/12/21] [tby spectral] [Sampling lights] Imagine you have the following scene:
- 1000 lights in a closed box
- 1 light out of the box
- 1 camera out of the box
If you use uniform sampling, you will get an almost black image, because you have only a 1/1000 chance of sampling the right light.
So you need roughly 1000 samples to get a correct one.
If you are able to generate another distribution... you will sample the lights more efficiently.
Of course, it is a special case :-P But there are a lot of possible cases...