Re: My little path tracer
Board: Raytracing / Visuals, Tools, Demos & Sources
[2018/03/16] Post by dawelter:
I think I squashed the worst bugs. This is fun stuff.
parabolic_reflector_volumetric.jpg
A perfectly parallel beam sent through a scattering medium. The sphere is made of a much denser medium. It is subsurface scattering with brute-force BDPT.
boxy_laser.jpg
[2018/03/24] Post by dawelter:
I threw a "real world" scene at it. Thinking about how else I could exploit volume caustics in the absence of refractive materials, I figured it would be cool to build a solar thermal power plant model. [SMILEY :-D]
Modelled in Stud.io and Cinema 4D.
677,761 triangles, most of them occluded because the bricks are stuck in each other.
8700 SPP in 34 h  [SMILEY :?]
The scene is contained in a box with a scattering medium, obviously. Fireflies due to SDS paths? I'm not sure, but I suppose there can be paths from camera to mirror to volume scattering to mirror to sun. The sun spans a small solid angle, so there is a non-zero chance to hit it with a "forward path" like that, and that would be the only technique in Veach's sense able to generate such a path. Makes sense?
There is also some sort of "light leak" on the first mirror.
solarlego.jpg
EDIT: Hm ... after some testing, the strange "leak" appears to be no bug after all. I'll write it off as a caustic cast by the mirror behind the one with the "leak".
Here is another shot. I moved the rear mirror a bit to see more of the caustics and turned down the brightness. It converges and looks like normal illumination.
solarlego_bug2.jpg
[2018/04/18] Post by beason:
Your explanation makes sense. Looks very nice! I like the planar laser beam. [SMILEY :D]
[2018/04/20] Post by dawelter:
Thank you, beason!
Actually, there was a bug: I wrongly assumed that the PDFs for BSDF sampling are symmetric, which they clearly are not. p(wi, wo) = p(wo, wi)? No! No! Ah well, as the saying goes, assumptions are the mother of all f'ups. I finally realized my mistake when ...
I implemented statistical tests (chi-square and t-test) for my scattering functions in the style of PBRT and Nori. I found it quite tricky, because the chance of at least one false alarm grows quickly with the number of tests. On the other hand, if the threshold for the p-value is set too low, one runs the risk of false negatives. My solution is to change the RNG seed rather than lowering the p-value. [SMILEY :lol:]
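For reference, a minimal sketch of such a chi-square test under stated assumptions: Bsdf::Sample/Bsdf::Pdf and SphericalDirection are hypothetical stand-ins for the actual scatter-function interface, the sphere is binned in (theta, phi), expected counts come from a midpoint-rule integral of the claimed PDF, and boost::math supplies the chi-squared CDF.
[CODE]
#include <boost/math/distributions/chi_squared.hpp>
#include <cmath>
#include <vector>

// Chi-square goodness-of-fit test for a direction sampler, PBRT/Nori style.
// Small returned p-value => reject the sampler as inconsistent with Pdf().
double ChiSquarePValue(const Bsdf& bsdf, Rng& rng,
                       int numSamples = 100000,
                       int thetaBins = 10, int phiBins = 20) {
  const double Pi = 3.14159265358979323846;
  std::vector<double> observed(thetaBins * phiBins, 0.0);
  for (int i = 0; i < numSamples; ++i) {
    Vec3 w = bsdf.Sample(rng);
    int it = std::min(int(std::acos(w.z) / Pi * thetaBins), thetaBins - 1);
    int ip = std::min(int((std::atan2(w.y, w.x) + Pi) / (2 * Pi) * phiBins), phiBins - 1);
    observed[it * phiBins + ip] += 1.0;
  }
  double chi2 = 0.0;
  int usedBins = 0;
  for (int it = 0; it < thetaBins; ++it) {
    for (int ip = 0; ip < phiBins; ++ip) {
      // Midpoint-rule integral of the claimed pdf over the bin; the solid
      // angle measure contributes the sin(theta) factor.
      double theta = (it + 0.5) * Pi / thetaBins;
      double phi = (ip + 0.5) * 2 * Pi / phiBins - Pi;
      double binOmega = (Pi / thetaBins) * (2 * Pi / phiBins) * std::sin(theta);
      double expected = numSamples * bsdf.Pdf(SphericalDirection(theta, phi)) * binOmega;
      if (expected < 5.0) continue;  // sparse bin; a real implementation would pool these
      double d = observed[it * phiBins + ip] - expected;
      chi2 += d * d / expected;
      ++usedBins;
    }
  }
  boost::math::chi_squared dist(usedBins - 1);
  return 1.0 - boost::math::cdf(dist, chi2);
}
[/CODE]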
I still have trouble with the microfacet BSDF. I get a bad sample count at the edge of a bin where, incidentally, wi*h = wo*h = 0. There the PDF also seems to diverge because of a 1/|wi*h| term. I'm unsure whether that's a bug. Probably not; the material looks okay ...  [SMILEY ;)]
[2018/04/24] Post by dawelter:
I hadn't implemented a refractive material until now. That badly needed to be rectified.
RefractiveCaustics.jpg
Not too shabby, I suppose.
(Although, since SDS paths are not rendered, parts are missing, e.g. in the lower row the areas behind the glass, and in the top right the beam behind the glass.)
[2018/07/10] Post by dawelter:
Meanwhile I switched to Embree. The renderer now feels quite a bit faster. I didn't measure, because my own data structures had to go on principle; they were that bad. [SMILEY :lol:] I still have to sort out the mess of mixed double and float calculations that I now have ... but things seem to work again.
The new pic also shows smooth shading, which wasn't implemented for BDPT when I made the first version of that pic.
solarlego_embree.jpg
[2018/07/15] Post by dawelter:
Went for some pretty low-hanging fruit: importance-sampled HDR skydomes!  [SMILEY :)]
hdrskydome.jpg
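A minimal sketch of the standard approach, under assumptions: Distribution2D, Image, and Luminance() are illustrative helpers, the map is equirectangular, and each texel is weighted by luminance times sin(theta) so the solid-angle PDF works out.
[CODE]
#include <cmath>
#include <vector>

// Build a 2D discrete distribution over the environment map texels.
Distribution2D BuildSkyDistribution(const Image& env) {
  std::vector<double> weights(env.width * env.height);
  for (int y = 0; y < env.height; ++y) {
    double sinTheta = std::sin((y + 0.5) * M_PI / env.height);
    for (int x = 0; x < env.width; ++x)
      weights[y * env.width + x] = Luminance(env.At(x, y)) * sinTheta;
  }
  return Distribution2D(weights, env.width, env.height);
}

// Draw a direction proportional to the sky's luminance.
Vec3 SampleSky(const Distribution2D& dist, double u1, double u2, double* pdfOmega) {
  double pdfUv;
  auto [u, v] = dist.Sample(u1, u2, &pdfUv);  // texel coords in [0,1)^2
  double theta = v * M_PI, phi = u * 2.0 * M_PI;
  // Jacobian of the equirectangular map: p(omega) = p(u,v) / (2 pi^2 sin(theta)).
  *pdfOmega = pdfUv / (2.0 * M_PI * M_PI * std::sin(theta));
  return Vec3(std::sin(theta) * std::cos(phi),
              std::sin(theta) * std::sin(phi),
              std::cos(theta));
}
[/CODE]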
[2019/01/08] Post by dawelter:
Hello guys,
I continued working on my renderer for a bit. I added support for emissive media to my path tracing algorithms.
At the moment it can only render the particular demo medium shown in the image below. The issue is that I'm missing a general volume representation that would allow sampling light-path start points within it. At the moment the spherical geometry is hard-coded in the demo medium.
Apart from that I want to integrate OpenVDB eventually. With that I'd get a very powerful grid representation and a data loader practically for free [SMILEY :-)]
Here is the image. The medium emits black-body radiation, i.e. the Planck spectrum at the given temperature. I also varied the density.
emissive_lights_collage.jpg
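For reference, the emission itself is just Planck's law; a self-contained helper in SI units:
[CODE]
#include <cmath>

// Planck's law: spectral radiance of a black body in W / (m^2 sr m),
// with lambda in meters and T in kelvin.
double PlanckSpectralRadiance(double lambda, double T) {
  constexpr double h  = 6.62607015e-34;  // Planck constant [J s]
  constexpr double c  = 2.99792458e8;    // speed of light [m / s]
  constexpr double kB = 1.380649e-23;    // Boltzmann constant [J / K]
  const double l5 = lambda * lambda * lambda * lambda * lambda;
  return 2.0 * h * c * c / (l5 * (std::exp(h * c / (lambda * kB * T)) - 1.0));
}
[/CODE]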
[2019/03/02] Post by dawelter:
Here are some new results from my renderer. I've been messing with photon mapping ...
Checking whether the physics works as intended: the beam is supposed to be focused like that. I let the construction of the scene be guided by the so-called lensmaker's equation [LINK https://en.wikipedia.org/wiki/Lens_%28optics%29#Lensmaker%27s_equation].
lensescene.jpg
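For reference, the thin-lens form used to lay out such a scene (with the convention that R2 < 0 for a biconvex lens) and a worked example:
[CODE]
\frac{1}{f} = (n - 1)\left(\frac{1}{R_1} - \frac{1}{R_2}\right),
\qquad\text{e.g. } n = 1.5,\ R_1 = 0.1\,\mathrm{m},\ R_2 = -0.1\,\mathrm{m}
\ \Rightarrow\ \frac{1}{f} = 10\,\mathrm{m}^{-1},\ f = 0.1\,\mathrm{m}.
[/CODE]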
More or less brute-force Monte Carlo SSS. [SMILEY :-)]
buddha.jpg
Spectral rendering! The spectral representation of light was there all along, but I only really took advantage of it in the atmosphere model. Not any more! This is probably the best rendering out of ToyTrace so far. [SMILEY :-)]
prism.jpg
[2019/03/12] Post by andersll:
Very cool! I love the volumetric caustics in the lens image.
[2019/03/13] Post by cignox1:
I think the jade Buddha is really amazing!
[2019/03/15] Post by dawelter:
Hi. Thank you both. Glad you like it.  [SMILEY :)]
Btw, I forgot to say: I rendered the media with "The Beam Radiance Estimate" by Jarosz et al. (2008). I use Embree to do the beam-point query. This is possible thanks to fairly recent support for ray-aligned disc intersections! So, if you happen to read this, Embree developers: thank you very much. This is a very cool feature. *thumbs*
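For the curious, a sketch of how such a beam-photon query can be set up with Embree 3's ray-oriented disc points; the intersection filter rejects every hit so that traversal visits all photon discs along the ray. Photon and AccumulatePhoton() are illustrative placeholders, not ToyTrace's actual code:
[CODE]
#include <embree3/rtcore.h>
#include <vector>

// Sketch only: illustrative photon record.
struct Photon { float x, y, z, gatherRadius; /* plus weight, direction, ... */ };

// Intersection filter: accumulate the photon's contribution, then mark the
// hit invalid so traversal continues and every disc along the beam is seen.
void GatherFilter(const RTCFilterFunctionNArguments* args) {
  if (args->valid[0] == 0) return;
  unsigned primID = RTCHitN_primID(args->hit, args->N, 0);
  AccumulatePhoton(args->context, primID);  // illustrative callback
  args->valid[0] = 0;  // "reject" => Embree keeps traversing
}

// Every volume photon becomes a ray-oriented disc (x, y, z, radius), so a
// single rtcIntersect1 along the camera ray performs the beam-point query.
RTCGeometry MakePhotonDiscs(RTCDevice device, const std::vector<Photon>& photons) {
  RTCGeometry geom = rtcNewGeometry(device, RTC_GEOMETRY_TYPE_DISC_POINT);
  float* v = (float*)rtcSetNewGeometryBuffer(
      geom, RTC_BUFFER_TYPE_VERTEX, 0, RTC_FORMAT_FLOAT4,
      4 * sizeof(float), photons.size());
  for (size_t i = 0; i < photons.size(); ++i) {
    v[4 * i + 0] = photons[i].x;
    v[4 * i + 1] = photons[i].y;
    v[4 * i + 2] = photons[i].z;
    v[4 * i + 3] = photons[i].gatherRadius;
  }
  rtcSetGeometryIntersectFilterFunction(geom, GatherFilter);
  rtcCommitGeometry(geom);
  return geom;
}
[/CODE]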
Currently, I'm trying to implement Walter et al.'s "Microfacet Models for Refraction through Rough Surfaces" (2007). It proved much more difficult than I thought, mostly getting the expression for the density p(wo|wi) right, where wo and wi are given. I need it for BDPT MIS. My renderer knows only a monolithic BSDF; it does not allocate component BxDFs like PBRT. Therefore p must include both reflection and transmission. Oh well, I think I finally got it ...
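For what it's worth, here is a sketch of such a combined density in the conventions of that paper: HalfVectorPdf(h) = D(h)|h·n| is the half-vector density, FresnelDielectric is the reflect-vs-refract selection probability, and the two branches use the reflection and refraction Jacobians (Walter et al., Eqs. 14-17). All helper names are placeholders:
[CODE]
#include <cmath>

// Combined sampling density p(wo | wi) of a monolithic rough dielectric.
double MonolithicPdf(const Vec3& wi, const Vec3& wo, double etaI, double etaO) {
  if (SameHemisphere(wi, wo)) {
    // Reflection: h = normalize(wi + wo), Jacobian dWh/dWo = 1 / (4 |wo.h|).
    Vec3 h = Normalize(wi + wo);
    double F = FresnelDielectric(Dot(wi, h), etaI, etaO);
    return F * HalfVectorPdf(h) / (4.0 * std::abs(Dot(wo, h)));
  } else {
    // Transmission: h = -normalize(etaI*wi + etaO*wo),
    // Jacobian = etaO^2 |wo.h| / (etaI (wi.h) + etaO (wo.h))^2.
    Vec3 h = Normalize(-(etaI * wi + etaO * wo));
    double F = FresnelDielectric(Dot(wi, h), etaI, etaO);
    double denom = etaI * Dot(wi, h) + etaO * Dot(wo, h);
    return (1.0 - F) * HalfVectorPdf(h) *
           etaO * etaO * std::abs(Dot(wo, h)) / (denom * denom);
  }
}
[/CODE]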
[2019/03/18] Post by XMAMan:
Hey dawelter,
can you tell me which paper the Buddha (subsurface scattering) is based on? I'd like to get into this topic as well. If you have questions about Walter's microfacet model, just ask me ^^ I have implemented it in my raytracer. For me, the big problem was numerical issues in the GGX normal-distribution function when using very small roughness factors and a theta angle that goes nearly to zero (micronormal == macronormal).
[2019/03/19] Post by dawelter:
We have known since VCM that photon mapping can be seen as a path sampling method. Therefore, I first want to refer to
Raab et al. (2008) "Unbiased Global Illumination with Participating Media"
for the concise path integral formulation of volume rendering.
I use the stochastic progressive variant of photon mapping with the global radius decay from
Knaus & Zwicker (2011) "Progressive Photon Mapping: A Probabilistic Approach"
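That global radius decay boils down to a single recurrence (a sketch; alpha in (0,1) trades residual bias against variance, with about 2/3 being a common choice):
[CODE]
// Knaus & Zwicker (2011): one global radius sequence shared by all pixels,
// r_{i+1}^2 = r_i^2 * (i + alpha) / (i + 1), so the radius shrinks slowly
// enough that variance still vanishes while the bias also goes to zero.
double NextRadiusSquared(double r2, int i, double alpha = 2.0 / 3.0) {
  return r2 * (i + alpha) / (i + 1.0);
}
[/CODE]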
To generate photon paths I use Woodcock tracking, essentially. In "Spectral and Decomposition Tracking for Rendering Heterogeneous Volumes", Kutz et al. (2017) developed many extensions of this. I use the "spectral tracking" variant. I put a volume photon at every sampled interaction point.
When tracing eye paths, I obtain volume interaction points with the same tracking methods as used for photon mapping. At these points I look for nearby photons and add their contribution - that is, in the basic variant, with point-point 3D estimators. In "The Beam Radiance Estimate for Volumetric Photon Mapping", Jarosz et al. (2008) developed a method to gather photons along a beam. I implemented this as well; the three pics are rendered with it. It is good for thin media. Actually, I like this paper a lot. Not only does it present the beam estimate, in Sec. 3.3 it also has a nice derivation of the photon weights.
In denser media I don't want to look for photons all the way to the next surface intersection. So I use a piecewise-constant stochastic estimate of the transmittance along the query beam. This essentially allows cutting off the query beam after a few mean free path lengths. Inspiration for this comes from Jarosz et al. (2011), "Progressive Photon Beams", Sec. 5.2.1, and Krivanek et al. (2014), "Unifying Points, Beams, and Paths in Volumetric Light Transport Simulation", Sec. 4.2, "long" and "short" beams.
You can ask me about specifics. I'll try to answer. [SMILEY :)]
But to be honest, this is very much brute force. If you want to render SSS in media as dense as what I used for the Buddha, you might be better off with a fast approximation!
Regarding Walter et al.'s rough transmission model: I think I finally got it right. Here is a recreation of Figure 1.  [SMILEY :D]
glossy_transmissive_globe.jpg
I "only" implemented the Beckmann NDF with V-Cavity masking & shadowing function. Looks fine and I don't have to implement VNDF sampling to keep the weights low. I also noticed numerical issues with low alpha. But IIRC I get it to 1e-3 with no issue. And at that point the material looks pretty much perfectly specular. I do shading calculations in double precision though.
Btw, since you mention GGX: Heitz recently released a paper on how to sample the VNDF for GGX more easily.
[LINK http://jcgt.org/published/0007/04/01/paper.pdf]
[LINK https://hal.archives-ouvertes.fr/hal-01509746/document]
I thought about implementing it ... For you it's probably worthwhile, if you don't have it already.
[2019/03/19] Post by XMAMan:
Thanks a lot for your detailed answer. This will help me get off to a good start.
At the moment I use the sampling technique from Eric Heitz described in this paper:
[LINK https://hal.inria.fr/hal-00996995v1/document]
The paper from your link is then the next step. But at the moment I'm more interested in subsurface scattering.
[2019/03/22] Post by papaboo:
If you already have the VNDF sampling from the 2014 paper, then implementing the new one is trivial. It's the same set of samples with the same PDF, so you just have to copy-paste the reference sampling method and you'll have faster GGX sampling. [SMILEY :)]
[2019/03/24] Post by dawelter:
@XMAMan you're welcome.
Meanwhile, I rendered variations of the Buddha, this time with the new glossy transmissive material. I also added a switch in the material to force a path-tracing step instead of getting Li from photons, essentially treating the BSDF as if it were a delta function. No NEE yet. I'm slightly concerned about the black rims, but I attribute them to the lack of anything to reflect.
buddha_variations.jpg
So far so good. But my renders take awfully long. After reading through the Arnold and Manuka papers, I've concluded that I should focus on some basics: sane light selection, QMC sampling, path guiding, splitting and RR are things I want to have.
[2019/03/25] Post by knightcrawler25:
>> dawelter wrote: ↑Sun Mar 24, 2019 10:04 am
Very pretty indeed!
[2019/04/09] Post by dawelter:
Thanks, knightcrawler! [SMILEY :-)]
Here is another 24 h render. I had this fun idea to make a flat-earth version of the globe figure. [SMILEY :lol:]
glossytransmissiveflatearth.jpg
The dome is filled with a thin scattering medium, which creates the glow effect around the sun. The image has other details I like, such as the subtle shadows cast on the surrounding walls. But it proved difficult to render, i.e. it's still noisy.
[2020/02/08] Post by dawelter:
Here is a first result from my attempt to implement path guiding following the "Path Guiding in Production" paper.
So far I have surface guiding only. My ingredients are:
* A kd-tree inspired by "Practical Path Guiding for Efficient Light-Transport Simulation" by Müller et al. (2017)
* Gaussian Mixture model to represent incident radiance, inspired by "On-line Learning of Parametric Mixture Models for Light Transport Simulation" by Vorba et al. (2014)
* Forward path tracing only.
I rendered my take on the famous torus scene:
torus_in_cube_guided.jpg
It is actually only a little better than standard path tracing, which is shown in the following:
torus_in_cube.jpg
Same number of samples.
To see that it actually does something sane, I visualize the distributions pertaining to the kd-tree cells. This one is from the final learning pass:
plot1.jpg
"Sampled" means the obvious, it's from were the samples are drawn. It is fixed. "Learned" is the distribution fitted to the iradiance obtained from the drawn samples.
[2020/03/04] Post by dawelter:
Another overnight render. Much better this time.
torus_in_cube.jpg
I switched to mixtures of von Mises-Fisher distributions and implemented tricks from the "Path Guiding in Production" paper (stochastic filtering and throughput clamping). Together with some parameter tweaking, this helped. Now I can start working on the guiding algorithm for volume rendering. [SMILEY :-)]
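For reference, drawing a direction from a single von Mises-Fisher lobe is compact in its numerically stable form (FrameFromZ(), which builds a rotation taking +z onto mu, is an assumed helper; for a mixture, one first picks a lobe proportional to its weight):
[CODE]
#include <algorithm>
#include <cmath>

// Sample a vMF lobe with mean direction mu and concentration kappa.
Vec3 SampleVmf(const Vec3& mu, double kappa, double u1, double u2) {
  // cos(theta) w.r.t. mu, via the inverse CDF of the vMF marginal.
  double w = 1.0 + std::log(u1 + (1.0 - u1) * std::exp(-2.0 * kappa)) / kappa;
  double sinTheta = std::sqrt(std::max(0.0, 1.0 - w * w));
  double phi = 2.0 * M_PI * u2;
  Vec3 local(sinTheta * std::cos(phi), sinTheta * std::sin(phi), w);
  return FrameFromZ(mu) * local;  // rotate so +z maps onto mu
}
[/CODE]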
[2020/03/10] Post by koiava:
That looks gorgeous. I'm wondering what the memory requirements of this additional data would be for a more or less realistic production scene.
Did you also implement the sample weighting described in that paper?
[2020/03/14] Post by dawelter:
Thanks [SMILEY :-)]
I haven't been very concerned with memory so far. However, I can tell that this scene uses about 2k cells, each of which has to store the coefficients of two mixtures plus some statistics for the EM algorithm. To be frank, I haven't tested the algorithm on much besides this one scene ... it's still very much a work in progress.
And regarding the weighting - you mean in the expectation maximization? Yes, I absolutely needed that, because I only do forward path tracing. Hence, initially, the distribution from which directions are sampled has nothing to do with the actual distribution of incident light. If you want to see the code, it's pretty terrible, but take a look if you like: https://github.com/DaWelter/ToyTrace/blob/master/src/distribution_mixture_models.cxx#L407
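To make the weighting concrete: each sample's Monte Carlo weight (contribution divided by sampling PDF) scales its responsibility in the E-step. A minimal batch sketch with illustrative types, not the actual ToyTrace interfaces (the kappa refit from the mean resultant length is omitted):
[CODE]
#include <numeric>
#include <vector>

void WeightedEmStep(VmfMixture& mix, const std::vector<Vec3>& dir,
                    const std::vector<double>& mcWeight) {
  const int K = mix.NumComponents();
  std::vector<double> respSum(K, 0.0);
  std::vector<Vec3> dirSum(K, Vec3(0, 0, 0));
  for (size_t i = 0; i < dir.size(); ++i) {
    // E-step: responsibilities of the lobes for sample i ...
    std::vector<double> r(K);
    double total = 0.0;
    for (int k = 0; k < K; ++k)
      total += r[k] = mix.Weight(k) * mix.ComponentPdf(k, dir[i]);
    if (total <= 0.0) continue;
    // ... each scaled by the sample's Monte Carlo weight.
    for (int k = 0; k < K; ++k) {
      double rw = mcWeight[i] * r[k] / total;
      respSum[k] += rw;
      dirSum[k] += rw * dir[i];
    }
  }
  // M-step: new lobe weights and mean directions.
  double all = std::accumulate(respSum.begin(), respSum.end(), 0.0);
  for (int k = 0; k < K; ++k) {
    mix.SetWeight(k, respSum[k] / all);
    mix.SetMeanDirection(k, Normalize(dirSum[k]));
  }
}
[/CODE]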
Edit: Obligatory Cornell box with water. This one is unfortunate, as my (poor) implementation of brute-force path tracing (but still with NEE) beats my (also poor) implementation of the guided algorithm in an equal-time rendering. The guided algorithm seems more sample-efficient, though.
cornelbox_water.jpg
P.S. A cell containing the mixtures and other data takes about 1.1 kB.
[2020/07/08] Post by dawelter:
Had some time to also implement the directional quad-tree representation from Müller et al. (2017), "Practical Path Guiding for Efficient Light-Transport Simulation". Here are some observations:
* Mixtures of von Mises-Fisher (MoVMF) distributions don't seem to fit the incident radiance samples as well. Occasionally peaks are too narrow, too broad, or point in the wrong direction.
* They play badly with my Vorba & Krivanek style ADRRS (Russian roulette) implementation. I use only the learned incident radiance estimate Li(x,w) to determine whether a path is to be terminated, so the survival probability is essentially Li(x,w)*bsdf(w,x,w')*path_contribution_up_to_x/pixel_intensity_estimate (see the sketch after this list). If the estimate Li(x,w) happens to be a bad fit, such that Li(x,w) is much less than the real radiance, then most paths are terminated, leading to huge variance and splotchy artifacts.
* Quad-trees make it easy to keep track of the variance in each node. Thus the weight window in the ADRRS termination criterion can be scaled by an error estimate stddev(Li(x,w)). In other words: if unsure about the future path contribution, give more wiggle room.
* We can also try to direct samples to regions that are little explored, i.e. nodes with few samples. This is related to the exploration-exploitation dilemma in reinforcement learning. I implemented a sort of upper-confidence-bound algorithm of the kind normally applied to multi-armed bandit problems. It's very hacky and unscientific.
* To get better MoVMF fits, shuffle the samples before they are fed into the EM routine. My render loop operates in passes with a small number of samples per pixel per pass. After each pass, samples are sorted into the spatial bins of the guiding data structure; then the directional distribution of each cell is fitted to the shuffled samples with the incremental algorithm.
* A variable mixing factor between SD-tree and BSDF sampling probability, also known as the MIS selection probability, is very beneficial; see Sec. 10.5 of the SIGGRAPH 2019 course "Path Guiding in Production". For very narrow BSDFs, the product of the BSDF and Li(w) looks almost like the BSDF alone; in the extreme case of perfectly specular reflection/transmission, only the reflected/transmitted direction yields a non-zero value. For now I only implemented a hacky shader-dependent mixing. In my "flat earth" scene, where the camera looks through several translucent surfaces, the default 0.5 mix weight works pretty terribly.
* My quad-tree code seems a little faster than the MoVMF code (gut feeling; didn't measure). All the evaluations of exponential functions in the MoVMF code seem to kill the runtime, in spite of fast exponential approximations and compilation to SIMD instructions.  [SMILEY :(]
Overall, both algorithm variants can be quite sample-efficient after the training passes. [SMILEY :-D] Here is a comparison using the new quad-tree representation. Total runtime is another matter ...
cornelbox_water_path_guiding_comparison.png
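Here is the ADRRS sketch referenced in the list above. Names are illustrative, and the weight-window clamp is one common choice rather than necessarily what ToyTrace does:
[CODE]
#include <algorithm>

// ADRRS-style survival probability: the expected contribution of the
// continued path, estimated via the learned Li, relative to the pixel
// intensity estimate.
double SurvivalProbability(const Spectrum& throughput,   // contribution up to x
                           const Spectrum& bsdfTimesLi,  // bsdf(w,x,w') * Li(x,w')
                           double pixelEstimate) {
  double expected = Average(throughput * bsdfTimesLi) / pixelEstimate;
  // Paths promising little contribution are likely terminated; paths
  // promising more than the pixel estimate always survive (and could even
  // be split instead).
  return std::clamp(expected, 0.05, 1.0);
}
[/CODE]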
[2020/07/08] Post by dawelter:
P.S.: Improved visualization. The spheres show the incident radiance; you can see the peaks from the main light, and overall non-zero values from indirect illumination. The boxes show the principal axes of the covariance matrices of the sample positions within the cells of the SD-tree.
guiding_distribution_visualization.jpg
[2020/07/29] Post by dawelter:
As the path guiding doesn't do as well as I'd like, I decided to do something else:
I built quasi-Monte Carlo sampling into the forward path tracer. I'm using the 21201-dimensional Sobol sequence from Joe & Kuo (https://web.maths.unsw.edu.au/~fkuo/sobol/index.html). I tried a 2D base pattern with rotations and XOR scrambling, but got biased images compared to the random sampler. A fixed number of dimensions is allocated per BSDF sample, NEE light sample, distance sample, and so on. Some things are still randomly sampled, like the BSDF component. Due to the use of delta tracking, distance sampling can consume an unbounded number of random numbers; I limit it to 10 dimensions or so, after which pseudo-random samples are drawn.
The base sequence is rotated per pixel by different amounts based on a blue-noise pattern, like in the Arnold paper. Correlations between pixels don't matter here, since each pixel converges individually.
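A minimal sketch of that per-pixel rotation (Cranley-Patterson style); Sobol() and BlueNoise() stand in for the actual tables:
[CODE]
// Per-pixel Cranley-Patterson rotation of a shared Sobol sequence. The
// blue-noise value shifts each pixel's copy of the sequence so that error
// is decorrelated across the image.
double QmcSample(int sampleIndex, int dim, int px, int py) {
  double u = Sobol(sampleIndex, dim);     // base low-discrepancy sample
  double shift = BlueNoise(px, py, dim);  // per-pixel offset in [0,1)
  double s = u + shift;
  return s < 1.0 ? s : s - 1.0;           // toroidal wrap stays in [0,1)
}
[/CODE]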
And it actually works.  [SMILEY :lol:]
qmc_vs_random.jpg
Some more pics:
* Basic wine glass, rendered with photon mapping. Nothing fancy:
https://www.dropbox.com/s/owyeb63x1zhkms4/wineglass.jpg
* Dragon with more stuff. Forward path tracing with QMC, using the maximum-roughness trick from Arnold to prevent fireflies: https://www.dropbox.com/s/q2is7g0ub8bxixp/xyzrgb_dragon_extended_qmc1024spp.jpg
[2020/10/07] Post by XMAMan:
The dragon looks good. Which BRDF model are you using?