Re: Measure the convergence speed
(L) [2014/03/07] [by Tristan] [Re: Measure the convergence speed] Wayback!>> It's a good read, but I think there have been improvements in the basic approach since it was published.
Any pointers to those improvements?  [SMILEY :)]
(L) [2014/03/07] [by friedlinguini] [Re: Measure the convergence speed] Wayback!>> Tristan wrote: It's a good read, but I think there have been improvements in the basic approach since it was published.
Any pointers to those improvements?  
[LINK http://www.cgg.unibe.ch/publications/2011/adaptive-sampling-and-reconstruction-using-greedy-error-minimization]
[LINK http://www.cgg.unibe.ch/publications/2012/adaptive-rendering-with-non-local-means-filtering]
[LINK http://www.cmlab.csie.ntu.edu.tw/project/sbf/]
[LINK http://www.ece.ucsb.edu/~psen/Papers/EG13_RemovingMCNoiseWithGeneralDenoising.pdf]
Same basic algorithm in each--render a noisy image, apply some kind of denoising filter, estimate the per-pixel error, drive more samples to reduce the error, rinse, lather, repeat.
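That loop can be sketched as follows. This is a toy stand-in of my own, not code from any of the linked papers: a real renderer would estimate per-pixel error from a denoised image, while here a fake `toy_error` model plays that role.

```python
# Toy sketch (my own stand-in, not code from any of the linked papers) of the
# render / estimate-error / refine loop described above. A real renderer would
# estimate per-pixel error from a denoised image; here `toy_error` fakes it.

def adaptive_render(estimate_error, budget_per_pass, passes, width, height):
    # start with one sample per pixel everywhere
    samples = [[1] * width for _ in range(height)]
    for _ in range(passes):
        errors = [[estimate_error(x, y, samples[y][x])
                   for x in range(width)] for y in range(height)]
        total = sum(map(sum, errors)) or 1.0
        for y in range(height):
            for x in range(width):
                # distribute this pass's budget proportionally to estimated error
                samples[y][x] += round(budget_per_pass * errors[y][x] / total)
    return samples

def toy_error(x, y, n):
    # error falls off as 1/sqrt(n); pixel (0, 0) is ten times "harder"
    difficulty = 10.0 if (x, y) == (0, 0) else 1.0
    return difficulty / n ** 0.5

spp = adaptive_render(toy_error, budget_per_pass=64, passes=4, width=4, height=4)
# the hard pixel ends up with far more samples than the easy ones
```

The "rinse, lather, repeat" part is the outer pass loop; the denoising filter only changes how `estimate_error` is computed, not the shape of the loop.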
(L) [2014/03/07] [by ypoissant] [Re: Measure the convergence speed] Wayback!Thanks for the pointers to those documents. Now on my to-read list.
(L) [2014/04/03] [by Dade] [Re: Measure the convergence speed] Wayback!Recently, I did some work on this topic, based on some of the papers listed in this thread. I'm very happy with the results. You can find a description of the work here: [LINK http://www.luxrender.net/forum/viewtopic.php?f=8&t=10955]
And a demo video here: [LINK https://www.youtube.com/watch?v=P_QmdpnKTW4]
(L) [2014/04/04] [by mpeterson] [Re: Measure the convergence speed] Wayback!Hmm, open-source renderers are catching up. Not bad.
A+
(L) [2014/04/06] [by ypoissant] [Re: Measure the convergence speed] Wayback!>> Dade wrote: Recently, I did some work on this topic and based on some of the papers listed in this thread. I'm very happy of the results.
Following your post, I also implemented the algorithm as outlined in "Progressive Path Tracing with Lightweight Local Error Estimation". However, I found that using (AllSampledImage(x,y) - OnlyEvenSampledImage(x,y))^2 to compute the variance didn't work too well. The issue I had is that when a firefly appears in an even pass, it is present in both AllSampledImage and OnlyEvenSampledImage and so is not detected as variance. To get good results with fireflies, I had to use (OnlyOddSampledImage(x,y) - OnlyEvenSampledImage(x,y))^2 to compute the variance. This works much better. I do have to average both Odd and Even buffers to get the final render result, though.
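A minimal sketch of that odd/even estimate, with illustrative buffer names (not the paper's actual code). The odd and even buffers hold per-pixel means of the odd- and even-numbered passes; a firefly that lands in only one half-buffer produces a large squared difference and gets flagged for refinement:

```python
# Sketch of the (Odd - Even)^2 estimate described above; buffer names are
# illustrative, not from the paper. The final image averages the two halves.

def pixel_error(odd, even):
    # per-pixel squared difference of the two half-buffers
    return [[(o - e) ** 2 for o, e in zip(orow, erow)]
            for orow, erow in zip(odd, even)]

def final_image(odd, even):
    # the displayed result is the average of both half-buffers
    return [[(o + e) / 2 for o, e in zip(orow, erow)]
            for orow, erow in zip(odd, even)]

odd  = [[0.5, 0.5]]
even = [[0.5, 8.0]]   # a firefly landed in the even half-buffer only
err = pixel_error(odd, even)
# err is large at the firefly pixel and zero at the converged one
```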
(L) [2014/04/06] [by friedlinguini] [Re: Measure the convergence speed] Wayback!>> ypoissant wrote: Following your post, I also implemented the algorithm as outlined in "Progressive Path Tracing with Lightweight Local Error Estimation". However, I found that using (AllSampledImage(x,y) - OnlyEvenSampledImage(x,y))^2 to compute the variance didn't work too well. The issue I had is when a firefly appears in an even pass, then this firefly will be present in both AllSampledImage and OnlyEvenSampledImage and then is not detected as variance. In order to get good results with fireflies, I had to use (OnlyOddSampledImage(x,y) - OnlyEvenSampledImage(x,y))^2 to compute the variance. This works much better. I have to average both Odd and Even buffers to get the final render result though.
The two are equivalent, other than a constant scale factor (All = Odd/2 + Even/2 => All - Even = Odd/2 - Even/2). If the firefly comes from a single even-numbered sample, then it should be twice as bright in the even image, since there are half as many total samples. Perhaps you were taking the difference after tone mapping?
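The identity is easy to check numerically (my own sketch, with arbitrary per-pixel half-buffer values):

```python
# Numeric check of the identity above: with All = (Odd + Even) / 2, the
# (All - Even)^2 and (Odd - Even)^2 estimates differ by a constant factor of 4.

odd, even = 3.0, 8.0         # arbitrary per-pixel half-buffer values
all_ = (odd + even) / 2      # the full buffer is the average of the halves

assert all_ - even == (odd - even) / 2
assert (odd - even) ** 2 == 4 * (all_ - even) ** 2
```

So the two estimates rank pixels identically in linear radiance; a discrepancy can only come from a non-linear step such as tone mapping applied before the difference.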
(L) [2014/04/07] [by ypoissant] [Re: Measure the convergence speed] Wayback!>> friedlinguini wrote: The two are equivalent, other than a constant scale factor (All = Odd/2 + Even/2 => All - Even = Odd/2 - Even/2). If the firefly comes from a single even-numbered sample, then it should be twice as bright in the even image, since there are half as many total samples. Perhaps you were taking the difference after tone mapping?
Yes, the difference is taken after tone mapping. That is what the authors of the "... Lightweight Local Error Estimation" article do, because the error is computed in a perceptual color space, so to speak. This seemed to make sense because pixels that saturate to white don't need to be refined, and pixels that the tone mapping compresses toward white reach their low-variance state more quickly.
That said, even when using (Odd - Even)^2, there are rare situations where fireflies happen in the same pixel in both the odd and even buffers and then don't get refined. So computing the variance after tone mapping is probably not such a good idea after all. One firefly is one too many.
(L) [2014/04/07] [by tarlack] [Re: Measure the convergence speed] Wayback!DISCLAIMER: this is not a "look at my marvelous publications" post; I'm not in public research anymore, so my impact factor is something I don't care about at all [SMILEY :mrgreen:] It's just that the two publications I list are simple yet effective and robust solutions (I do not like non-robust algorithms).
I tried quite a few approaches to adaptive sampling, and honestly the simplest method I could think of gave the best results, for a simple reason: for a correct variance estimation, you have to make the variance of the variance estimate itself decrease, and it seems that most methods do not ensure this. The result is a method that is maybe not optimal sample-wise but is guaranteed to converge, and it was highly robust in all my tests: interleave uniform and adaptive sampling passes. This way the variance estimate is guaranteed to have its own variance go to zero, and thus the estimated error is guaranteed to be correct, except in highly pathological cases where no sample ever hits the "create-a-firefly" case.
It's all in this poster: [LINK https://www.researchgate.net/publication/248399078_robust_adaptive_sampling_poster?ev=prf_pub]
For fireflies, I tried many things as well, from image-space to sample-space. As long as the number of fireflies in the image is not tremendous, I found that robust image-based methods could give surprisingly good results. All the image-based methods I knew of introduced blur, so I made a simple image-based technique which has the great advantage of introducing neither blur nor obvious artifacts, based on detection and """smart""" reconstruction of the detected fireflies in HDR images (NOT bilateral filtering, which relies on non-robust statistics). Although I had a hard time admitting it because I don't like image-based methods (bias, yuck...), it gave impressive results on my test scenes. All the info is in this 2-page paper: [LINK https://www.researchgate.net/publication/237008971_Effective_Despeckling_of_HDR_Images?ev=prf_pub], just take a look at the top row of images.
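The interleaving idea can be sketched as follows. This is my own paraphrase, not the poster's code: alternating uniform and adaptive passes means every pixel keeps receiving samples, so a pixel whose error was wrongly estimated as zero still gets a chance to be corrected.

```python
# My own paraphrase of the interleaving idea above (not the poster's code):
# alternate uniform and adaptive passes so every pixel keeps getting samples,
# which lets the variance of the variance estimate itself shrink over time.

def next_pass_allocation(pass_index, errors, budget):
    n = len(errors)
    if pass_index % 2 == 0:
        # uniform pass: equal share regardless of the current error estimate
        return [budget // n] * n
    # adaptive pass: share proportional to the current error estimate
    total = sum(errors) or 1.0
    return [round(budget * e / total) for e in errors]

# two pixels whose error is (perhaps wrongly) estimated as zero
errors = [0.0, 0.0, 1.0, 3.0]
uniform = next_pass_allocation(0, errors, budget=8)
adaptive = next_pass_allocation(1, errors, budget=8)
# the uniform pass still samples the "zero-error" pixels, so a wrong
# estimate there cannot starve them forever
```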
(L) [2014/04/07] [by Dade] [Re: Measure the convergence speed] Wayback!>> ypoissant wrote: Dade wrote: Recently, I did some work on this topic and based on some of the papers listed in this thread. I'm very happy of the results.
Following your post, I also implemented the algorithm as outlined in "Progressive Path Tracing with Lightweight Local Error Estimation". However, I found that using (AllSampledImage(x,y) - OnlyEvenSampledImage(x,y))^2 to compute the variance didn't work too well. The issue I had is when a firefly appears in an even pass, then this firefly will be present in both AllSampledImage and OnlyEvenSampledImage and then is not detected as variance. In order to get good results with fireflies, I had to use (OnlyOddSampledImage(x,y) - OnlyEvenSampledImage(x,y))^2 to compute the variance. This works much better. I have to average both Odd and Even buffers to get the final render result though.
Are you using the average of all estimated pixel variances in the tile (like in the papers) or the max? I'm using the max; it gives better results for me. I prefer to be "consistent and robust" rather than "optimal and sometimes wrong".
The idea is that the estimated variance can be wrong for one pixel, but being wrong for all 32x32 pixels is practically impossible.
P.S. thanks Tarlack, going to read them.
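The mean-versus-max choice can be shown with a toy tile (illustrative values, my own sketch): aggregating with max() keeps refining a tile as long as any single pixel looks unconverged, while the mean can hide one bad pixel among many converged ones.

```python
# Sketch of the aggregation choice described above: max() keeps refining a
# tile while any one pixel looks unconverged; the mean can hide a single bad
# pixel among many converged ones. Values are illustrative.

def tile_error_mean(errors):
    return sum(errors) / len(errors)

def tile_error_max(errors):
    return max(errors)

tile = [0.001] * 63 + [0.9]   # 8x8 tile with one firefly-ish pixel
threshold = 0.05
# the mean declares the tile converged; the max keeps sampling it
```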
(L) [2014/04/07] [by ypoissant] [Re: Measure the convergence speed] Wayback!Dade: My tile size is 8x8. I use the max error of 2x2 pixels.
I'm going to read the other papers too. Thanks, Tarlack.