Population Monte Carlo Sampling

[2014/01/05] [post by raider] [Population Monte Carlo Sampling]

Hi all!
Anybody tried "Population Monte Carlo" samplers?
How does it compare to Metropolis/MIS approaches (in terms of performance, storage requirements, and difficult lighting setups)?
Here is what I mean:
[LINK http://pages.cs.wisc.edu/~yu-chi/research/pmc/PMC_files/pmc.pdf]
[2014/01/06] [post by Dietger] [Population Monte Carlo Sampling]

It seems like an interesting method, but I never tried it. Note that the PMC methods in the mentioned paper are not very advanced because the resampling step is completely omitted. Basically, the sampling method is adapted between iterations based on information gathered during previous iterations. As long as the probabilities are computed correctly and never drop to zero for contributing samples this obviously works, but I am sure this has been used before without mentioning PMC. It would be interesting to see if the resampling step could be used for light transport in a constructive way.
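
To make that concrete, here is a toy 1D sketch of the adaptation idea (my own simplification, not the algorithm from the paper): an integral is estimated with a mixture of two proposal densities, and the mixture weights are re-fit between iterations from the previous iteration's importance weights. The defensive uniform component keeps the mixture pdf strictly positive wherever the integrand contributes, which is exactly the condition mentioned above.

```python
# Toy 1D "PMC without resampling": estimate an integral with a mixture of two
# proposal densities and re-fit the mixture weights between iterations from the
# previous iteration's importance weights. All constants are illustrative.
import numpy as np

rng = np.random.default_rng(1)

def f(x):
    """Spiky integrand on [0, 1]; true integral is roughly 0.075."""
    return np.exp(-0.5 * ((x - 0.3) / 0.01) ** 2) + 0.05

LO, HI = 0.25, 0.35                          # support of the narrow proposal

def pdf_component(k, x):
    if k == 0:                               # defensive component: U[0, 1]
        return np.ones_like(x)
    return np.where((x >= LO) & (x <= HI), 1.0 / (HI - LO), 0.0)

def sample_component(k, n):
    return rng.uniform(0.0, 1.0, n) if k == 0 else rng.uniform(LO, HI, n)

alpha = np.array([0.5, 0.5])                 # mixture weights, adapted below
N = 4000
for it in range(5):
    comp = rng.choice(2, size=N, p=alpha)    # draw from the current mixture
    x = np.where(comp == 0, sample_component(0, N), sample_component(1, N))
    p = alpha[0] * pdf_component(0, x) + alpha[1] * pdf_component(1, x)
    w = f(x) / p                             # importance weights; estimate stays unbiased
    print(f"iter {it}: alpha = {alpha.round(3)}, estimate = {w.mean():.5f}")

    # adapt: each component's new weight is its responsibility for the samples,
    # averaged with the normalised importance weights
    wbar = w / w.sum()
    resp = np.stack([alpha[k] * pdf_component(k, x) / p for k in range(2)])
    alpha = np.maximum(resp @ wbar, 0.05)    # keep a defensive floor > 0
    alpha /= alpha.sum()
```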

A practical issue I see with the methods from this paper is that they store/adapt the mixing weights per pixel. This works fine for adapting the per pixel sampling rate or sampling decisions at the first bounce (usually), but is hard to generalize to other sampling decisions in a path tracer. You could of course try to store the mixing weights in world space instead of screen space (akin to Jensen's "Importance Driven Path Tracing using the Photon Map"), but such sampling magic is tricky to get right and easy to break.

Dietger
[2014/01/06] [post by Dade] [Population Monte Carlo Sampling]

I may be wrong, but I believe Octane currently uses PMC.
[2014/01/07] [post by raider] [Population Monte Carlo Sampling]

Dietger, thanks a lot! Quite explanatory.
[2014/01/07] [post by friedlinguini] [Population Monte Carlo Sampling]

I found a somewhat later paper ([LINK http://pages.cs.wisc.edu/~yu-chi/research/pmc-er/PMCER_files/pmc-er-egsr.pdf]) to be intriguing, as it tries to combine PMC and ERPT.
[2014/01/08] [post by Dietger] [Population Monte Carlo Sampling]

As I stated earlier on this forum ([LINK http://ompf2.com/viewtopic.php?f=3&t=789&p=2268#p2268]), I have some doubts concerning the PMC-ERPT paper. To me, the paper skips over important proofs and details, and I am therefore not convinced that it actually makes sense. But I would love to be proven wrong [SMILEY :)]
[2014/01/08] [post by Zelcious] [Population Monte Carlo Sampling]

PMC is nothing magical; I'd even argue it is nothing new, it's just importance sampling used in a certain way. There is no extra benefit.
I've been experimenting a lot with different versions of importance sampling, including PMC, and I've learned one thing.
You have to importance sample ALL the peaks, otherwise uniform sampling will be more efficient.
Or
You will have to introduce bias and filter away high contributing paths. I think this is what Octane does even though they claim to be physically correct (it's probably an option).
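
For the biased option, the filtering usually amounts to some kind of firefly clamp. A hypothetical sketch, just to show the trade-off (not a claim about what Octane actually does):

```python
# Hypothetical sketch of the biased alternative: clamp unusually bright samples.
# The threshold rule is an assumption for illustration, not Octane's actual behaviour.
def clamp_contribution(contribution, pixel_mean, max_ratio=10.0):
    """Clamp a sample to max_ratio times the pixel's running mean.
    Removes fireflies, but systematically darkens bright, rarely sampled paths."""
    limit = max(max_ratio * pixel_mean, 1e-3)   # floor so early samples aren't clamped to ~0
    return min(contribution, limit)
```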

I recently wrote an unbiased method where I built global importance maps for regions in space with accumulated statistics from light tracing.
I used it for unbiased rendering of extremely difficult caustics scenes and it was really efficient.
BUT, the light tracing only picked up 99.99% of the caustics, so once in a while you would hit a path that wasn't directly covered by the importance sampling; its weight would blow up and you would get fireflies in the rendering.
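
To give a rough idea of the kind of structure I mean, here is a stripped-down flatland sketch with made-up names and constants (not my actual implementation): light-tracing energy is binned per spatial cell and per direction, then used as a guiding pdf by the path tracer, defensively mixed with a uniform pdf so that paths outside the recorded peaks keep a nonzero, if tiny, probability. That tiny tail is exactly where the fireflies come from.

```python
# Hypothetical flatland sketch of a "global importance map": light-tracing energy
# accumulated per spatial cell and per direction bin, then used as a guiding pdf
# during path tracing. Names and constants are made up for illustration.
import math
import random

CELLS, BINS = 16, 32            # spatial grid cells x direction bins
UNIFORM_MIX = 0.1               # defensive mixture so no direction ever has pdf 0

hist = [[0.0] * BINS for _ in range(CELLS)]

def record(cell, angle, energy):
    """Light-tracing pass: accumulate energy arriving at a cell from a direction."""
    b = int(angle / (2.0 * math.pi) * BINS) % BINS
    hist[cell][b] += energy

def sample_direction(cell):
    """Path-tracing pass: importance-sample an angle for this cell, return (angle, pdf)."""
    total = sum(hist[cell])
    bin_width = 2.0 * math.pi / BINS
    if total == 0.0:            # nothing recorded here yet: plain uniform sampling
        return random.random() * 2.0 * math.pi, 1.0 / (2.0 * math.pi)
    if random.random() < UNIFORM_MIX:
        angle = random.random() * 2.0 * math.pi
    else:
        # pick a bin proportionally to its accumulated energy, then uniformly inside it
        u, acc, b = random.random() * total, 0.0, BINS - 1
        for i, e in enumerate(hist[cell]):
            acc += e
            if acc >= u:
                b = i
                break
        angle = (b + random.random()) * bin_width
    b = int(angle / bin_width) % BINS
    guided = hist[cell][b] / total / bin_width
    return angle, UNIFORM_MIX / (2.0 * math.pi) + (1.0 - UNIFORM_MIX) * guided

# usage: record() many times during light tracing, then sample while path tracing
record(cell=3, angle=1.2, energy=0.8)
print(sample_direction(3))
```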

A thing that you always have to remember is that the error only goes down like 1/sqrt(N), so it takes forever to sample away fireflies.

It's easy to be fooled because you get a recognizable picture much, much faster, but those fireflies are really hard to get rid of, and uniform sampling will win in the long run unless you importance sample all the peaks. It's all very logical.

A simple example: say you have 1000 peaks in your sample space and decide to spend half of your samples importance sampling 999 of the peaks and the other half sampling uniformly. The last peak is then hit only by the uniform half, so it sees half the sample density it would get under plain uniform sampling and takes roughly twice as long to converge to the same error. So those 999 peaks will converge really fast, but the one you didn't cover will be worse off and force the whole rendering to take longer, unless you are willing to cheat a bit.
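
Here is a quick numerical check of that example with toy numbers (10 peaks instead of 1000, made-up widths and sample counts): the uncovered peak only sees the uniform half of the budget, so at equal sample count its error is about sqrt(2) larger than under plain uniform sampling, i.e. it needs roughly twice the samples to catch up.

```python
# Numerical check of the "one uncovered peak" argument with toy parameters: an
# integrand of K equal narrow peaks, a sampler that importance-samples K-1 of them
# with half its budget and stays uniform with the other half, versus plain uniform.
import numpy as np

rng = np.random.default_rng(0)
K, W, N, RUNS = 10, 1e-3, 50_000, 100
centers = np.linspace(0.05, 0.95, K)
height = 1.0 / W                                  # each peak has true mass 1

def last_peak_estimate(guided):
    """Estimate the mass of the uncovered last peak from one batch of N samples."""
    if guided:
        covered = centers[:-1]                    # importance sample 9 of the 10 peaks
        from_uniform = rng.random(N) < 0.5        # the other half of the budget is uniform
        x = np.where(from_uniform,
                     rng.uniform(0.0, 1.0, N),
                     rng.choice(covered, N) + rng.uniform(-W / 2, W / 2, N))
        in_cov = (np.abs(x[:, None] - covered) < W / 2).any(axis=1)
        pdf = 0.5 + 0.5 * in_cov / ((K - 1) * W)  # defensive mixture, never zero
    else:
        x, pdf = rng.uniform(0.0, 1.0, N), 1.0
    in_last = np.abs(x - centers[-1]) < W / 2
    return (in_last * height / pdf).mean()

for guided in (False, True):
    est = np.array([last_peak_estimate(guided) for _ in range(RUNS)])
    rms = np.sqrt(((est - 1.0) ** 2).mean())
    print("guided" if guided else "uniform", f"rms error of uncovered peak = {rms:.3f}")
```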

I also didn't like the PMC-ERPT paper. I think the modification makes it biased. I think I found a proof of that when I read it.
[2014/01/09] [post by Dade] [Population Monte Carlo Sampling]

Zelcious, correct me if I'm wrong, but the short version of your post, and the answer to the original thread question, is: Metropolis is superior to PMC (because it is able to sample all the 1000 "peaks").

P.S. I have never tried PMC so I cannot argue, but I'm not surprised because Metropolis has always worked quite well in my experience.
[2014/01/09] [post by Dietger] [Population Monte Carlo Sampling]

>> Dade wrote: Metropolis is superior to PMC (because it is able to sample all the 1000 "peaks").
Unfortunately it's slightly more complicated than that. MLT is only as good as its mutation strategies. Sure, it samples the peaks proportionally to their contribution, but if the mutation strategies cannot effectively explore the peaks, the correlation between MLT samples will be massive and thus result in fireflies (even worse, fireflies created at the cost of many samples per firefly instead of just one). Imagine the pathological example of a Kelemen-style MLT implementation with ONLY large step mutations. Yes, it will sample perfectly proportionally to the radiance function, and yes, it will SUCK!

So roughly speaking, just as MIS (and PMC) can only get rid of all fireflies if they sample ALL peaks effectively, MLT can only get rid of all fireflies if its mutation strategies can effectively explore ALL peaks.
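
To illustrate the pathological case with a toy example (a 1D target and an independence Metropolis sampler, not an actual renderer): with only large steps the chain still samples proportionally to f, but once it lands in the narrow bright peak almost every proposal is rejected, so the same bright state is replayed many samples in a row. That run length is the correlation that shows up as blotchy fireflies.

```python
# Toy illustration of the pathological case above: an independence Metropolis
# sampler over [0, 1] whose only mutation is a uniform "large step". It samples
# proportionally to f, but once it sits in the narrow bright peak almost every
# proposal is rejected and the same state is repeated many samples in a row.
import math
import random

random.seed(1)

def f(x):
    """'Radiance': dim background plus one narrow bright peak."""
    return 0.01 + math.exp(-0.5 * ((x - 0.5) / 1e-3) ** 2)

x = random.random()
fx = f(x)
stuck, runs = 0, []
for _ in range(200_000):
    y = random.random()                          # large step: forget the current state
    fy = f(y)
    if random.random() < min(1.0, fy / fx):      # Metropolis acceptance
        if stuck:
            runs.append(stuck)
        x, fx, stuck = y, fy, 0
    else:
        stuck += 1                               # same state contributes again

print("mean run of repeated states:   ", sum(runs) / len(runs))
print("longest run of repeated states:", max(runs))
```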
[2014/01/09] [post by raider] [Population Monte Carlo Sampling]

Hmm... does it mean that any adaptive MC algorithm is hopeless in general, as it introduces bias? Is it (at least theoretically) possible to have a priori bounded bias in adaptive MC?
