Robust Light Transport Simulation via Metropolised Bidirectional Estimators
(L) [2016/10/24] [Post by friedlinguini] [Robust Light Transport Simulation via Metropolised Bidirectional Estimators]
So I'm going through this paper, which was recently linked from Toshiya Hachisuka's page ([LINK http://www.ci.i.u-tokyo.ac.jp/~hachisuka/ups-vcm_mcmc.pdf]). The basic idea is trying to marry Metropolis light transport with UPS/VCM, which is great.
There's one sentence in this paper that's driving me a bit nuts, though: "The eye subpaths are used to immediately evaluate the contribution of unidirectional path tracing and path tracing with next event estimation, as these sampling techniques are independent of light subpaths (line 3)." I think of next event estimation as bidirectional path tracing where the light path has a single node, so I don't see why it would be considered separately. Is every eye path given its own light path vertex to decorrelate the sample results? Moreover, path tracing handles every path that can be sampled in an unbiased way, but the light path is used for both BDPT and vertex merging. Is the work supposed to be partitioned (e.g., only using the light paths for indirect lighting), or does the contribution from the eye paths get an MIS weight?
If anybody else has read (or written!) this paper, could they shed some light (sorry)?
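To make the question concrete, here is a minimal sketch of the view that next event estimation is just the s=1 case of a generic BDPT-style connection, i.e. a connection to a one-vertex light subpath whose throughput is simply 1 / (pdf of sampling the light point). All names (Vertex, connect, next_event) are illustrative, not from the paper:

```python
from dataclasses import dataclass

@dataclass
class Vertex:
    throughput: float   # subpath throughput divided by its sampling pdf so far
    pdf_area: float     # area-measure pdf of sampling this vertex

def geometry_term(dist, cos_eye, cos_light):
    # G(x <-> y) = |cos_eye| * |cos_light| / dist^2, visibility assumed 1
    return abs(cos_eye) * abs(cos_light) / (dist * dist)

def connect(eye_v, light_v, bsdf_eye, emitted, G):
    # Generic eye-to-light subpath connection: works for any light subpath length
    return eye_v.throughput * bsdf_eye * G * emitted * light_v.throughput

def next_event(eye_v, bsdf_eye, emitted, G, light_pdf_area):
    # NEE is the same connection with a one-vertex light subpath whose
    # throughput is just 1 / pdf of picking the light point
    light_v = Vertex(throughput=1.0 / light_pdf_area, pdf_area=light_pdf_area)
    return connect(eye_v, light_v, bsdf_eye, emitted, G)
```

Under this reading, the only thing that distinguishes NEE from the general connection is where the light vertex comes from (freshly sampled per eye vertex vs. reused from a traced light subpath), which is exactly what the question above is asking about.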
(L) [2016/10/24] [Post by atlas] [Robust Light Transport Simulation via Metropolised Bidirectional Estimators]
I haven't read the paper, but I have implemented VCM. For example, SmallVCM creates light vertices at each bounce and does not create a light vertex on the light itself. It then casts eye rays and performs direct light sampling at each eye-ray bounce. Maybe this is to reduce the overhead of storing a light vertex directly on the light along with its MIS weights, when it can all just be computed quickly during the eye-tracing phase?
(L) [2016/10/24] [Post by friedlinguini] [Robust Light Transport Simulation via Metropolised Bidirectional Estimators]
The proposed algorithm shoots and stores one eye path per pixel, and then samples the light path using Metropolis sampling, so one light vertex has negligible storage.
My best guess is that the NEE isn't strictly necessary, but it speeds things up and helps with stratification. But a guess is all it is.
(L) [2016/10/25] [Post by atlas] [Robust Light Transport Simulation via Metropolised Bidirectional Estimators]
Maybe the direct connection to the light requires, for MIS, the solid angle subtended by the emissive surface as seen from the connecting eye vertex, which would distinguish that light vertex from the other light vertices. There was a point when I understood most of it, but it's fuzzy to me right now without going back and looking at the code.
Thanks for the link though, I'll have to read up on this new method soon.
(L) [2016/10/25] [Post by bachi] [Robust Light Transport Simulation via Metropolised Bidirectional Estimators]
>> friedlinguini wrote: The proposed algorithm shoots and stores one eye path per pixel, and then samples the light path using Metropolis sampling, so one light vertex has negligible storage.
My best guess is that the NEE isn't strictly necessary, but it speeds things up and helps with stratification. But a guess is all it is.
I think this is the answer. Given an endpoint on the eye subpath, one can employ proper importance sampling on the light source for NEE (for example, for spherical lights or an environment map it is possible to cull the part of the light source that the eye subpath can't see). It also reduces the correlation at almost no cost.
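A standard example of the culling bachi mentions is cone sampling for a spherical light: instead of sampling the whole sphere surface, sample only directions within the cone it subtends as seen from the shading point, so the back half of the sphere is never sampled. A minimal sketch (the function name and interface are illustrative; the shading point is assumed to be outside the sphere):

```python
import math

def sample_sphere_light_cone(dist_to_center, radius, u1, u2):
    # Sample a direction, uniform over solid angle, inside the cone that the
    # sphere subtends as seen from the shading point.
    # Local frame: z axis points from the shading point toward the sphere center.
    sin_theta_max_sq = (radius / dist_to_center) ** 2
    cos_theta_max = math.sqrt(max(0.0, 1.0 - sin_theta_max_sq))
    cos_theta = 1.0 - u1 * (1.0 - cos_theta_max)  # uniform in cos over the cap
    sin_theta = math.sqrt(max(0.0, 1.0 - cos_theta * cos_theta))
    phi = 2.0 * math.pi * u2
    direction = (sin_theta * math.cos(phi), sin_theta * math.sin(phi), cos_theta)
    pdf = 1.0 / (2.0 * math.pi * (1.0 - cos_theta_max))  # uniform cone pdf
    return direction, pdf
```

The pdf is constant over the cone's solid angle 2*pi*(1 - cos_theta_max), which is exactly the kind of eye-vertex-dependent importance sampling that a precomputed light subpath vertex cannot provide.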
(L) [2016/11/11] [Post by ingenious] [Robust Light Transport Simulation via Metropolised Bidirectional Estimators]
>> bachi wrote: I think this is the answer. Given an endpoint on the eye subpath, one can employ proper importance sampling on the light source for NEE (for example, for spherical lights or an environment map it is possible to cull the part of the light source that the eye subpath can't see). It also reduces the correlation at almost no cost.
Indeed, the next-event estimation with a new independently sampled light source vertex is not strictly necessary but is typically done in bidirectional path tracing (BPT) implementations. The reason is to allow for better importance sampling for direct illumination (as you pointed out with the spherical light example), but also to reduce sampling correlation. Recall that the way BPT traditionally performs connections, every eye subpath vertex to every light subpath vertex, produces a large number of full paths from only two subpaths. These full paths share vertices, which introduces sampling correlation. This correlation in turn increases the variance of the estimator. Ideally you would sample every full path completely independently, but subpath reuse is cheap and in practice gives you better efficiency, which is 1 / (variance * sampling_effort). Still, for next-event connections (i.e. the technique that uses only one light subpath vertex), sampling a new vertex on the light source is typically cheap enough. Now, if sampling the light source is for some reason very expensive, you may be better off reusing the first vertex on the light subpath for every eye vertex.