Re: My first Supersize-Image


[2019/05/03] [post by XMAMan] [Re: My first Supersize-Image]

Ok, here is my first little step towards subsurface scattering. I have extended my bidirectional path tracing with homogeneous-media support, so I now have an unbiased estimator. The next steps will be Rayleigh/Mie scattering and inhomogeneous media.
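
In essence, the homogeneous-media support comes down to sampling a free-flight distance along the ray and weighting with the Beer-Lambert transmittance. A minimal sketch of that idea (placeholder names, not my actual classes):

[CODE]
// Minimal sketch: distance sampling and transmittance in a homogeneous medium.
// sigma_s / sigma_a are the scattering / absorption coefficients (placeholders).
#include <cmath>

struct HomogeneousMedium
{
    double sigma_s;
    double sigma_a;
    double SigmaT() const { return sigma_s + sigma_a; }

    // Beer-Lambert transmittance over a segment of length t.
    double Transmittance(double t) const { return std::exp(-SigmaT() * t); }

    // Sample a propagation distance with pdf sigma_t * exp(-sigma_t * t), given a
    // uniform random number u. Returns true if a medium interaction happens before
    // the next surface hit at tMax; pdf is what keeps the estimator unbiased.
    bool SampleDistance(double u, double tMax, double& t, double& pdf) const
    {
        t = -std::log(1.0 - u) / SigmaT();
        if (t < tMax)
        {
            pdf = SigmaT() * Transmittance(t);   // density of the sampled medium vertex
            return true;
        }
        t = tMax;
        pdf = Transmittance(tMax);               // probability of flying through to the surface
        return false;
    }
};
[/CODE]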
[2019/06/15] [post by XMAMan] [Re: My first Supersize-Image]

Step 2: Rayleigh and Mie scattering.

For Rayleigh phase-function direction sampling I used 'Importance sampling the Rayleigh phase function' from Jeppe (inverse CDF).
For Mie I use tabulation and the formula from 'ScratchAPixel: Simulating the Colors of the Sky'.
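
The inverse CDF of the Rayleigh phase function has a closed form (it is a cubic in cos theta); a small sketch of the sampling routine, the way I understand Jeppe's derivation:

[CODE]
#include <cmath>

// Sample cos(theta) for the Rayleigh phase function p(cos) = 3/(16*pi) * (1 + cos^2)
// via the inverse CDF; xi is uniform in [0,1). The azimuth phi is sampled uniformly.
double SampleRayleighCosTheta(double xi)
{
    double z = 2.0 * (2.0 * xi - 1.0);                 // right-hand side of the cubic
    double v = std::cbrt(z + std::sqrt(z * z + 1.0));  // Cardano's formula
    return v - 1.0 / v;                                // cos(theta) in [-1, 1]
}

// Matching pdf with respect to solid angle, for the MIS weights.
double RayleighPhasePdf(double cosTheta)
{
    const double Pi = 3.14159265358979323846;
    return 3.0 / (16.0 * Pi) * (1.0 + cosTheta * cosTheta);
}
[/CODE]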

For distance sampling, I use Woodcock tracking, following 'Unbiased Global Illumination with Participating Media' - Raab et al. (2008).
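
Roughly, Woodcock tracking samples tentative free flights against a majorant extinction and rejects the "null" collisions; a sketch (the density callback and the majorant are placeholders):

[CODE]
#include <cmath>
#include <random>

// Sketch of Woodcock (delta) tracking for unbiased distance sampling in a heterogeneous
// medium. sigmaTAt(t) returns the extinction at distance t along the ray, sigmaTMax must
// be an upper bound (majorant) of that extinction, tMax is the distance to the next surface.
template <typename SigmaT>
double WoodcockSampleDistance(SigmaT sigmaTAt, double sigmaTMax, double tMax, std::mt19937& rng)
{
    std::uniform_real_distribution<double> uni(0.0, 1.0);
    double t = 0.0;
    do
    {
        t -= std::log(1.0 - uni(rng)) / sigmaTMax;   // tentative free flight against the majorant
        if (t >= tMax)
            return tMax;                             // no interaction before the surface
    } while (sigmaTAt(t) / sigmaTMax < uni(rng));    // reject null collisions
    return t;                                        // accepted real collision distance
}
[/CODE]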

For the attenuation term I use a pre-computed table. For ideas on how to implement it, I used the paper from Nishita, 'Display Method of the Sky Color Taking into Account Multiple Scattering' (1996).
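
The table is basically the optical depth tabulated over height and zenith angle, so each attenuation lookup is just one exponential; a very reduced sketch (nearest-neighbour lookup, in practice one would interpolate):

[CODE]
#include <algorithm>
#include <cmath>
#include <vector>

// Sketch of a precomputed attenuation table for the atmosphere: the optical depth from
// a point at a given height towards the atmosphere border is tabulated over
// (height, cosine of the zenith angle) and filled offline by numerical integration.
struct AttenuationTable
{
    int nH = 64, nMu = 64;
    double hMax = 80000.0;          // assumed atmosphere thickness in meters
    std::vector<double> tau;        // nH * nMu optical-depth entries

    double Transmittance(double height, double cosZenith) const
    {
        int ih = std::clamp(int(height / hMax * (nH - 1)), 0, nH - 1);
        int im = std::clamp(int((cosZenith * 0.5 + 0.5) * (nMu - 1)), 0, nMu - 1);
        return std::exp(-tau[ih * nMu + im]);   // Beer-Lambert from the stored optical depth
    }
};
[/CODE]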

The next step is adding clouds.
[2019/06/24] [post by XMAMan] [Re: My first Supersize-Image]

In the next image you can see the difference between single scattering and multiple scattering.
Image 1: Bidirectional path tracing (multiple scattering)
Image 2: Sampling a particle point on the edge of an eye subpath (I don't know the official name of this path-creation sampling routine) (multiple scattering)
Image 3: Sampling a particle point only on the edge of the primary ray (single scattering)
Image 4: Creating 20 segments on the primary-ray edge and connecting them to the sun (single scattering); see the sketch below
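
The Image-4 estimator is simple ray marching; as a sketch (a homogeneous medium is assumed here to keep it self-contained, and the distance towards the sun is treated as constant, which my renderer of course does not do):

[CODE]
#include <cmath>

// Sketch of the Image-4 estimator: split the primary-ray segment into n pieces and
// connect the midpoint of every piece to the sun (single scattering only). sigmaS/sigmaT
// are the scattering/extinction coefficients, cosSun the cosine between ray and sun
// direction, distTowardsSun the (here constant) path length towards the sun.
double EstimateSingleScattering(double tMax, double cosSun, double distTowardsSun,
                                double sigmaS, double sigmaT, double sunRadiance, int n = 20)
{
    const double Pi = 3.14159265358979323846;
    double phase = 3.0 / (16.0 * Pi) * (1.0 + cosSun * cosSun);   // Rayleigh phase function
    double dt = tMax / n;
    double sum = 0.0;
    for (int i = 0; i < n; ++i)
    {
        double t = (i + 0.5) * dt;                                // midpoint of segment i
        double towardsCamera = std::exp(-sigmaT * t);             // attenuation camera -> point
        double towardsSun = std::exp(-sigmaT * distTowardsSun);   // attenuation point -> sun
        sum += towardsCamera * sigmaS * phase * towardsSun * sunRadiance * dt;
    }
    return sum;
}
[/CODE]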
[2019/08/05] [post by XMAMan] [Re: My first Supersize-Image]

This is the well-known water Cornell box  [SMILEY :)]
[2019/08/06] [post by XMAMan] [Re: My first Supersize-Image]

And this is the mirror box. Got the idea from dawelter ^^
[2019/09/06] [post by dawelter] [Re: My first Supersize-Image]

Nice progress!  [SMILEY :)]
[2019/10/03] [post by XMAMan] [Re: My first Supersize-Image]

Ok. Here is my next participating-media image: clouds [SMILEY :o] [SMILEY :o] [SMILEY :o] . For the sky I use the model from Nishita, and for the clouds the cumulus cloud model from David Ebert.

The images are rendered with 200 samples per pixel. I use light-source sampling at random points on the eye-path segments, and also light-source sampling at the eye-path vertices, to create paths from the camera to the light.
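
The "random point on an eye-path segment" connection is, in isolation, just a one-sample Monte Carlo version of the segment sum from the single-/multiple-scattering comparison above; a sketch, again with a homogeneous medium so that it stays self-contained:

[CODE]
#include <cmath>
#include <random>

// Sketch: pick one uniform point on an eye-path segment, connect it to the sun, and
// divide by the pdf of that point (1 / segmentLength). phase is the phase-function value
// for the camera-to-sun turn, distTowardsSun the path length towards the sun (assumed
// constant here); all names are placeholders.
double ConnectSegmentPointToSun(double segmentLength, double distTowardsSun,
                                double sigmaS, double sigmaT,
                                double phase, double sunRadiance, std::mt19937& rng)
{
    std::uniform_real_distribution<double> uni(0.0, 1.0);
    double t = uni(rng) * segmentLength;                      // uniform point on the segment
    double pdf = 1.0 / segmentLength;                         // pdf with respect to distance

    double towardsCamera = std::exp(-sigmaT * t);             // attenuation along the eye segment
    double towardsSun = std::exp(-sigmaT * distTowardsSun);   // attenuation towards the sun

    return towardsCamera * sigmaS * phase * towardsSun * sunRadiance / pdf;
}
[/CODE]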
[2019/10/30] [post by XMAMan] [Re: My first Supersize-Image]

The next little step: Unifying Points, Beams, and Paths in Volumetric Light Transport Simulation (UPBP)

After writing 435 unit tests, which check every single class/sampler/sub-path sampler/full-path sampler/path-contribution check and verify each pdf against a histogram test, I can finally say that I have mastered UPBP. I have written an importer for the scene-description file format that is also used by the still-life scene from SmallUPBP, so I can compare my result with a reference.
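
By histogram test I mean: draw many samples from a sampler, bin them, and compare the bin densities against the claimed pdf. A reduced 1D sketch of how such a test can look (the tolerances are placeholders):

[CODE]
#include <cmath>
#include <functional>
#include <random>
#include <vector>

// 1D histogram test: estimate the density of 'sampleFn' with a histogram and compare
// every bin against the analytic pdf at the bin center. Returns true if all bins agree
// within the given relative tolerance.
bool HistogramMatchesPdf(const std::function<double(std::mt19937&)>& sampleFn,
                         const std::function<double(double)>& pdfFn,
                         double xMin, double xMax,
                         int bins = 50, int samples = 1000000, double tolerance = 0.05)
{
    std::mt19937 rng(1234);
    std::vector<double> histogram(bins, 0.0);
    double binWidth = (xMax - xMin) / bins;

    for (int i = 0; i < samples; ++i)
    {
        double x = sampleFn(rng);
        int b = int((x - xMin) / binWidth);
        if (b >= 0 && b < bins)
            histogram[b] += 1.0 / (samples * binWidth);   // density estimate per bin
    }

    for (int b = 0; b < bins; ++b)
    {
        double expected = pdfFn(xMin + (b + 0.5) * binWidth);
        if (expected > 1e-3 && std::abs(histogram[b] - expected) / expected > tolerance)
            return false;
    }
    return true;
}
[/CODE]

For example, the Rayleigh direction sampler from further above can be tested against its marginal pdf 3/8 * (1 + cos^2 theta) on [-1, 1].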

Can someone tell me what the next step in the topic of participating media is? Better distance sampling? Better importance sampling during segment sampling?

Here is an image with 2150 samples.
[2019/11/03] [post by ultimatemau] [Re: My first Supersize-Image]

>> XMAMan wrote: ↑Wed Oct 30, 2019 6:59 pm
The next little step: Unifying Points, Beams, and Paths in Volumetric Light Transport Simulation (UPBP)

After writing 435 unit tests, which check every single class/sampler/sub-path sampler/full-path sampler/path-contribution check and verify each pdf against a histogram test, I can finally say that I have mastered UPBP. I have written an importer for the scene-description file format that is also used by the still-life scene from SmallUPBP, so I can compare my result with a reference.

Can someone tell me what the next step in the topic of participating media is? Better distance sampling? Better importance sampling during segment sampling?

Here is an image with 500 samples.

Hi,

Very nice progress, really impressed! The next big / most state-of-the-art step you can take is better distance sampling + better scatter-direction sampling (phase-function sampling) + better Russian roulette, and possibly path splitting! Sounds like a lot?

Check out this paper, as far as I'm concerned it's in the top 3 papers of the year:
https://cgg.mff.cuni.cz/~jaroslav/papers/2019-volume-path-guiding/index.html

For more related papers, check out "Path Guiding in Production" from SIGGRAPH this year; a must-read by the looks of your progress so far:
https://jo.dreggn.org/path-tracing-in-production/2019/guiding.pdf

If you're interested in "Surface Path Guiding" too after reading all that, I'd suggest checking out the links in chapter / section 10 in "Path Guiding in Production". Follow the links. "Practical Path Guiding" is a paper from 2017, but got some nice additions from a more recent paper "Neural Importance Sampling" to optimize the MIS weights during training. Don't shy away from this one, it has source code you can look at too.

Have fun!

Cheers
[2019/11/03] [post by XMAMan] [Re: My first Supersize-Image]

Hey ultimatemau, thank you for your answer. All the papers sound good. I will take a look at them.
[2020/01/08] [post by dawelter] [Re: My first Supersize-Image]

Hi!

This is indeed very impressive, XMAMan!

Say, have you tried to render the atmosphere/cloud scene with UPBP? I have to wonder though how well the algorithm copes with "nature" scenes.

Btw, @ ultimatemau, thanks for the tip regarding "Path Guiding in Production". I found it's really a great overview and there are some tips in it which are not in the original papers [SMILEY :-)]
[2020/01/20] [post by XMAMan] [Re: My first Supersize-Image]

@dawelter: I have tried UPBP on the sky/cloud scene. It works, yes, but in this scene I use a directional light source for the sun. If I create a volumetric photon map for the beam-to-beam, point-to-point and beam-to-point queries, then only a few photons are stored in the view frustum. This is because the view frustum is very small compared with the whole atmosphere of the Earth. If I used a spot light aimed at the view frustum, or a directional light with importance sampling, it would perform better. But at the moment I have not implemented importance sampling for the light-sub-path creation step for directional lights. Once I have implemented this feature, I can test UPBP again. At the moment it works only about as well as if I didn't use a photon map at all.
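
The importance sampling I have in mind would look roughly like this: instead of emitting photons from a disk that covers the whole scene, sample the emission point on a disk that only covers the bounding sphere of the interesting region (e.g. the view frustum), perpendicular to the light direction, and use the matching area pdf. A sketch with a tiny placeholder vector type:

[CODE]
#include <cmath>

// Sketch: sample the origin of a light sub-path for a directional light on a disk that
// covers only the bounding sphere (center, radius) of the region we care about, instead
// of the whole scene. lightDir is the normalized direction the light travels in.
struct Vec3
{
    double x, y, z;
    Vec3 operator+(const Vec3& o) const { return {x + o.x, y + o.y, z + o.z}; }
    Vec3 operator-(const Vec3& o) const { return {x - o.x, y - o.y, z - o.z}; }
    Vec3 operator*(double s) const { return {x * s, y * s, z * s}; }
};

static Vec3 Cross(const Vec3& a, const Vec3& b)
{
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}

void SampleDirectionalLightOrigin(const Vec3& lightDir, const Vec3& center, double radius,
                                  double u1, double u2, Vec3& origin, double& pdfA)
{
    const double Pi = 3.14159265358979323846;

    // Orthonormal basis perpendicular to the light direction.
    Vec3 helper = std::abs(lightDir.x) > 0.9 ? Vec3{0.0, 1.0, 0.0} : Vec3{1.0, 0.0, 0.0};
    Vec3 tangent = Cross(lightDir, helper);
    double len = std::sqrt(tangent.x * tangent.x + tangent.y * tangent.y + tangent.z * tangent.z);
    tangent = tangent * (1.0 / len);
    Vec3 bitangent = Cross(lightDir, tangent);

    // Uniform point on the disk, pushed back against the light direction so that it
    // lies outside the bounding sphere.
    double r = radius * std::sqrt(u1);
    double phi = 2.0 * Pi * u2;
    origin = center - lightDir * (2.0 * radius)
           + tangent * (r * std::cos(phi)) + bitangent * (r * std::sin(phi));
    pdfA = 1.0 / (Pi * radius * radius);                 // area pdf of the disk
}
[/CODE]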
[2020/02/08] [post by dawelter] [Re: My first Supersize-Image]

Yes. You are right. Big scene plus directional light is a problem.

Would you consider spending more time on scenes in general? I believe you have many cool algorithms, but little material to exploit them with [SMILEY ;-)]
[2020/11/03] [post by XMAMan] [Re: My first Supersize-Image]

At the moment I'm working on hair and SSS BSDFs.

I have improved my parallax mapping and now the pillar on the left looks better:

[IMG #1 Image]
[2020/12/05] [post by XMAMan] [Re: My first Supersize-Image]

Here is the next little improvement. I got the tip from papaboo in this post:
 >> papaboo wrote: ↑Fri Mar 22, 2019 10:53 am
If you already have the VNDF sampling from the 2014 paper then implementing the new is trivial. It's the same set of samples with the same PDF, so you just have to copy paste the reference sample method and then you'll have faster GGX sampling

I have replaced the visible-normal sampling routine, going from slope-space sampling to projected-area sampling, and now the BRDF sampling is twice as fast.
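
For reference, the projected-area (visible-hemisphere) variant is essentially the short listing from Heitz's 2018 note 'Sampling the GGX Distribution of Visible Normals'; my sketch of it, with a tiny placeholder vector type instead of my real math classes:

[CODE]
#include <algorithm>
#include <cmath>

// Sketch of GGX visible-normal sampling in the projected-area form (Heitz 2018).
// Ve is the view direction in the local shading frame (z = surface normal),
// ax / ay the roughness values, u1 / u2 uniform random numbers in [0,1).
struct Vec3 { double x, y, z; };

static Vec3 Normalize(const Vec3& v)
{
    double l = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return {v.x / l, v.y / l, v.z / l};
}
static Vec3 Cross(const Vec3& a, const Vec3& b)
{
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}

Vec3 SampleGgxVndf(const Vec3& Ve, double ax, double ay, double u1, double u2)
{
    const double Pi = 3.14159265358979323846;

    // Transform the view direction to the hemisphere configuration.
    Vec3 Vh = Normalize({ax * Ve.x, ay * Ve.y, Ve.z});

    // Orthonormal basis around Vh.
    double lensq = Vh.x * Vh.x + Vh.y * Vh.y;
    Vec3 T1 = lensq > 0.0 ? Vec3{-Vh.y / std::sqrt(lensq), Vh.x / std::sqrt(lensq), 0.0}
                          : Vec3{1.0, 0.0, 0.0};
    Vec3 T2 = Cross(Vh, T1);

    // Sample the projected area: a disk, warped so it covers only the visible hemisphere.
    double r = std::sqrt(u1);
    double phi = 2.0 * Pi * u2;
    double t1 = r * std::cos(phi);
    double t2 = r * std::sin(phi);
    double s = 0.5 * (1.0 + Vh.z);
    t2 = (1.0 - s) * std::sqrt(1.0 - t1 * t1) + s * t2;

    // Reproject onto the hemisphere to get the half vector.
    double t3 = std::sqrt(std::max(0.0, 1.0 - t1 * t1 - t2 * t2));
    Vec3 Nh = {t1 * T1.x + t2 * T2.x + t3 * Vh.x,
               t1 * T1.y + t2 * T2.y + t3 * Vh.y,
               t1 * T1.z + t2 * T2.z + t3 * Vh.z};

    // Transform the normal back to the ellipsoid configuration.
    return Normalize({ax * Nh.x, ay * Nh.y, std::max(0.0, Nh.z)});
}
[/CODE]

As papaboo said, the pdf is the same as with the old slope-space sampling, so the rest of the BRDF code did not have to change.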

[IMG #1 Image]
[IMG #1]: https://web.archive.org/web/20210410223244im_/https://i.ibb.co/sHxQKXb/Microfacet-Sphere.jpg
[2020/12/19] [post by XMAMan] [Re: My first Supersize-Image]

I have now implemented HDR environment lighting. Here is a scene with a blue sky:

[IMG #1 Image]
[IMG #1]: https://web.archive.org/web/20210410223244im_/https://i.ibb.co/qpH8S9B/Mirrors-Edge-1000-Samples.jpg
