Shading normal issues w. adjoint bsdf
[2019/02/16] [Post by dawelter]
Hi there,
I seek help for robustly implementing Veach's correction for shading normals for the adjoint BSDF, i.e. for photon mapping or light tracing.
The problem: According to Veach we should add a correction factor that compensates for using shading normals instead of geometry normals:
[IMG eq5.18.PNG] (the factor |wo·Ns| |wi·Ng| / (|wo·Ng| |wi·Ns|) applied to the adjoint BSDF)
However, wo·Ns / wo·Ng can become arbitrarily large, since the denominator does not cancel and can become very small at grazing angles. I have seen this factor go up to 1000, and it introduced ugly fireflies in my render. As a band-aid fix I clamped the factor to 10. That is certainly not pretty, but it did help, with no noticeable change in the image.
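For concreteness, here is a minimal sketch of the factor and the band-aid clamp I am describing, assuming simple Vec3/dot helpers (illustrative only, not my actual code):

```cpp
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

static float dot(const Vec3 &a, const Vec3 &b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Veach's correction factor for light-carrying (adjoint) paths:
//   |wo.Ns| |wi.Ng| / (|wo.Ng| |wi.Ns|)
// clamped to [0, maxFactor] as a band-aid against fireflies.
float ShadingNormalCorrection(const Vec3 &wo, const Vec3 &wi,
                              const Vec3 &Ns, const Vec3 &Ng,
                              float maxFactor = 10.f) {
    float num   = std::fabs(dot(wo, Ns)) * std::fabs(dot(wi, Ng));
    float denom = std::fabs(dot(wo, Ng)) * std::fabs(dot(wi, Ns));
    if (denom == 0.f) return 0.f;  // degenerate grazing configuration
    return std::min(num / denom, maxFactor);
}
```

At grazing wo (wo·Ng → 0) the unclamped ratio explodes, which is exactly where the fireflies come from.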
How do production renderers deal with this?
I took a look at PBRT's implementation and found that in BDPT the correction is applied straightforwardly:
[LINK https://github.com/mmp/pbrt-v3/blob/master/src/integrators/bdpt.cpp#L55]
To my surprise, I did not find the correction in the photon mapping code!
[LINK https://github.com/mmp/pbrt-v3/blob/master/src/integrators/sppm.cpp#L403]
There is just the regular
Code:
Spectrum bnew = beta * fr * AbsDot(wi, isect.shading.n) / pdf;
Did I manage to find a bug? Is it a conscious decision to omit it to prevent fireflies?
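For what it's worth, wiring the factor in would only change that line by one multiplicative term. A toy sketch with plain floats standing in for PBRT's Spectrum (the names here are illustrative, not actual PBRT code):

```cpp
#include <cmath>

// Toy stand-in for the SPPM throughput update: plain floats replace
// Spectrum, and `correction` is Veach's shading-normal factor
// (equal to 1.0 when shading and geometric normals agree).
float UpdateBeta(float beta, float fr, float absDotWiNs,
                 float correction, float pdf) {
    return beta * fr * absDotWiNs * correction / pdf;
}
```

With correction == 1 this reduces to the uncorrected update quoted above.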
[2019/02/22] [Post by shocker_0x15]
I don't have a clear answer to your question, but I think there is a fundamental limitation with shading normals because they have no physical basis (as Veach noted).
One apparent way to mitigate arbitrarily large values is to use a well-tessellated model whose shading normals stay close to the geometric normals...
I completely agree with your surprise.
I felt the same surprise when I saw the implementation of PBRT v3.
In my opinion, PBRT should take the correction factor into account in integrators other than BDPT too, because PBRT is education-oriented, not production-oriented.
[2019/02/23] [Post by dawelter]
Hi.
Thanks for your reply. At least it is good to know that I didn't totally misunderstand something there.
For some reason the correction factor went way down when I used another mesh (a higher-res version of the Stanford bunny). I could swear it has a similar amount of jagged edges. I'm starting to think that either I have a weird bug somewhere or the mesh has errors.
Apart from investigating this, I will stick to clamping. It should be fine.
[2019/02/23] [Post by shocker_0x15]
It might help you to visualize the relation between the geometric normal and the shading normal like this:
clamp(
    RGB(
        0.5 + 10 * (dot(shadingNormal, geometricNormal) - 1),
        0.5 + 100 * (length(geometricNormal) - 1),
        0.5 + 100 * (length(shadingNormal) - 1)
    )
, 0, 1)
In the ideal, correct situation (shadingNormal == geometricNormal and both normals have unit length) this produces a completely gray image.
Multipliers like 10 and 100 are used to exaggerate differences from the ideal.
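A runnable version of this diagnostic, assuming the same kind of simple Vec3 helpers (a sketch, not shocker_0x15's actual shader):

```cpp
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

static float dot(const Vec3 &a, const Vec3 &b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}
static float length(const Vec3 &v) { return std::sqrt(dot(v, v)); }

// Diagnostic color from the post above: mid-gray (0.5, 0.5, 0.5) when
// the shading and geometric normals agree and both have unit length.
// Deviations are exaggerated (x10 / x100) and pushed into R/G/B.
Vec3 NormalDiagnostic(const Vec3 &ns, const Vec3 &ng) {
    auto clamp01 = [](float v) { return std::min(std::max(v, 0.f), 1.f); };
    return {
        clamp01(0.5f + 10.f  * (dot(ns, ng) - 1.f)),   // normal agreement
        clamp01(0.5f + 100.f * (length(ng) - 1.f)),    // |Ng| == 1 ?
        clamp01(0.5f + 100.f * (length(ns) - 1.f)),    // |Ns| == 1 ?
    };
}
```

A uniformly mid-gray output means the shading normals match the geometric normals and are normalized; colored fringes point at the suspect triangles.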
[2019/02/24] [Post by dawelter]
Doh! [SMILEY :x] I was about to file an issue. Instead I got my answer right there ...
[LINK https://github.com/mmp/pbrt-v3/issues/209]
@Shocker: Yeah ... it's a bit overkill, but if I cannot understand the differences between the meshes otherwise, why not. [SMILEY ;-)]
[2019/03/24] [Post by T.C. Chang]
Haha, I was the one who filed the issue. In my implementation I keep the correction term, since I do not like the idea of turning a consistent method into an inconsistent one. If I recall correctly, Jakob said that (in the Mitsuba renderer) a better importance sampling technique is needed for the term. I also felt the same surprise as you did.
[2019/03/26] [Post by charles]
You can find an alternative formulation in this paper as well: [LINK https://blogs.unity3d.com/2017/10/02/mkicrofacet-based-normal-mapping-for-robust-monte-carlo-path-tracing/?_ga=2.22915315.912987470.1553609190-920979160.1501401260]
I haven’t implemented it myself, but it is supposed to be symmetric and more stable.
[2019/03/27] [Post by dawelter]
Hi Charles,
Thank you. The link has a typo, but it's clear which paper you mean: "Microfacet-based Normal Mapping for Robust Monte Carlo Path Tracing". The results look really nice.
One thing bugs me, though. In the caption of Figure 18, the authors state
"Our normal-mapping model ... fails the white furnace test if vertex normals are interpolated (please zoom in, bottom right). This is a separate problem not addressed in this work."
I fail to understand how vertex normals are fundamentally different from normal mapping, because one could bake the interpolated normals into a normal map. Interpolated normals are thus equivalent to a special case of a normal map, aren't they?
Anyway, I found the somewhat related work "Linear Efficient Antialiased Displacement and Reflectance Mapping" by Dupuy et al. (2013). It is really about anti-aliasing; however, it involves constructing an NDF whose mean normal points in a desired direction (Eq. 8). In this regard it seems to address the same issue as the former paper. I only skimmed the paper, so apologies if I misrepresent anything.
Through Karl Li's blog I found another interesting paper: "Consistent Normal Interpolation" [LINK https://blog.yiningkarlli.com/2015/01/consistent-normal-interpolation.html]. I have to question its compatibility with bidirectional methods, though, because the calculation of the normal needs the direction of the incident ray, so eye and light random walks would "see" different normals at a given surface point. It seems to work for Karl Li, though.