Re: Vertex merging paper and SmallVCM

Board: Raytracing Links & Papers

[2012/10/14] [Post by Zelcious] [Re: Vertex merging paper and SmallVCM]

The last scene, with the two spheres and the blue and yellow walls, how are the lights in the roof constructed? The path tracing image looks really off to me, but I guess there is a natural explanation for that.
My best guess would be a lens in the middle not covering the entire hole in combination with next event estimation. Are the sides of the tube reflective?

If my guesses are correct, it would be quite interesting to see what happens if you turned those lenses into portals in combination with path tracing. Perhaps even importance sample every specular surface (to a lesser degree).
[2012/10/14] [Post by keldor314] [Re: Vertex merging paper and SmallVCM]

PPM (and VCM) can be made to be unbiased simply by having each photon store information about the previous hit, rather than the current hit.  Then when the eye path lands close to a photon, you simply shoot a ray toward the photon's source (be it directly at a light or simply the last thing the photon bounced off of).  This allows you to use an arbitrarily large search radius for nearby photons, at the cost of tracing one extra ray per sample.  Does anyone see any flaws with this idea?
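To make the proposal concrete, here is one way to write down the two estimators (my notation, a hedged sketch — not from any of the papers discussed). The first line is standard kernel density estimation at the eye vertex z; the second replaces the photon's stored last bounce by an explicit connection to its previous vertex y_j, evaluated with the exact BSDFs, geometry term G and visibility V from the extra shadow ray:

```latex
% Standard photon density estimation (photon j with power \Phi_j at x_j):
L(z, \omega_o) \;\approx\; \frac{1}{\pi r^2}
    \sum_{\lVert x_j - z\rVert \le r} f_s(z, \omega_j, \omega_o)\, \Phi_j

% Proposed variant: reconnect to the photon's previous vertex y_j, where
% \beta_j is the photon throughput up to y_j (before its last bounce):
L(z, \omega_o) \;\approx\; \frac{1}{\pi r^2}
    \sum_{\lVert x_j - z\rVert \le r}
    f_s(y_j, \omega_j^{\mathrm{in}}, y_j \!\to\! z)\,
    G(y_j, z)\, V(y_j, z)\,
    f_s(z, y_j \!\to\! z, \omega_o)\, \beta_j
```

The kernel still normalizes by the search area, which is where the residual bias discussed in the replies below comes from.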

You could further classify photons into several groups based on their intensity, with brighter photons being searched for at a larger radius, while their contribution would be divided by the area of the search area (volume?).  This should give some of the benefits of MLT.
[2012/10/14] [Post by ingenious] [Re: Vertex merging paper and SmallVCM]

>> madd wrote:Are any of the test scenes available somewhere?
The Mirror balls scene is available on Toshiya Hachisuka's web page. The Car scene I downloaded from [LINK http://artist-3d.com/free_3d_models/dnm/model_disp.php?uid=3394 here] and tweaked materials and shading normals a bit. If there's interest, I can release that. The Living room and Bathroom cannot be released, unfortunately.
 >> Dietger wrote:Let me try to clarify what I meant here. Of course you are right. The initial noise will be less, the added bias will go away and the algorithm does inherit BDPT's high asymptotic performance. But in practice nobody waits that long! If you would be willing to wait that long, you might as well have started with vanilla BDPT to begin with! After all, the whole reason we use PPM in the first place was because BDPT is too slow for some stuff.
You sometimes actually wait that long. It can easily happen that some caustics look a bit blurry initially, then you wait for some time for them to get sharper (and also to get rid of noise on the diffuse surfaces), and in the end the image looks overall much better than it would look with BPT. Such scenes are shown in the PPM and SPPM papers, and you can also see plenty on the LuxRender forum.
 >> Dietger wrote:
In practice, we render only for a limited time (probably way too short for the asymptotic properties to have a significant impact). In that finite render time, PPM starts off doing things badly (bias artifacts around small objects) and then hopes to fix it later. The more PPM screws up initially, the longer it will take to fix this later. If PPM had just started with an appropriately small but fixed radius for each pixel, it could realize a better quality/bias trade-off in the same render time. Unfortunately, we usually don't know this optimal radius (too big => bias, too small => noise); that's why we are stuck with this reducing radius. The reducing radius should free us from having to guess the optimal radius, at the cost of some extra bias within the same render time. Unfortunately, it turns out that the extra bias/noise from a too big/small initial radius can still be pretty significant, even when rendering for a reasonably long time. This is why the initial radius is still such an important parameter for the PPM algorithm, PPM's theoretical consistency notwithstanding.
I completely agree -- the choice of filter support in (P)PM is crucial for quality. And in VCM it plays an even more interesting role: it directly controls the relative weight of the vertex connection and merging techniques. A smaller radius means more weight for vertex connections. Arguably, the choice of radius is less crucial in VCM than in PPM, because it does not impact quality as directly. We actually argue for choosing a smaller initial radius than for PPM, as this results in less bias, and diffuse transport (which suffers most from small radii) is well handled by vertex connection. All that said, adaptive bandwidth selection is definitely an interesting, and also orthogonal, problem. It is not addressed in the paper and could additionally improve quality.
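For reference, the relationship being described — if I am reading the VCM paper's formulation correctly — is that a merging technique's path pdf is the corresponding connection pdf scaled by the kernel acceptance probability, so its balance-heuristic weight is inflated by the number n of light sub-paths:

```latex
p_{\mathrm{VM}}(\bar{x}) \;\approx\; \pi r^2 \, p_{\mathrm{VC}}(\bar{x})
\qquad\Longrightarrow\qquad
w_{\mathrm{VM}}(\bar{x}) \;\propto\;
    \frac{n \,\pi r^2\, p_{\mathrm{VC}}(\bar{x})}
         {\sum_{t} p_t(\bar{x}) \;+\; n \,\pi r^2 \sum_{t} p_t(\bar{x})}
```

As r goes to zero the merging weight vanishes and VCM degenerates to bidirectional path tracing, which is why the radius acts as the dial between the two families of techniques.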
 >> Dietger wrote:But than again, Kelemen-MLT did manage to find them. So assuming that Veach-MLT has a reasonable probability of throwing away the first few eye vertices, it should have a decent probability of finding these caustics at least once in a while. And when MLT finds them, it will hang on! So that's why I expected at least some ugly bright splotches here and there  
More worrying, in the 'Living room' scene, PT and BDPT have no problem at all sampling the direct light on the table in the mirror, so a properly set up Veach-MLT mutation should also have no problem there. Unless the mutation selection probabilities are set up exceptionally badly, I think something is off. Anyhow, I shouldn't complain, as this is probably the only half-decent Veach-MLT implementation out there.
As I said, we will rerun the Mitsuba MLT tests with the latest version. Actually, Wenzel was also suspicious about the correctness of the preliminary Veach-MLT implementation we used, which he was kind enough to provide well before the release of Mitsuba 0.4, which reportedly includes numerous fixes.
[2012/10/14] [Post by ingenious] [Re: Vertex merging paper and SmallVCM]

>> Zelcious wrote:The last scene, with the two spheres and the blue and yellow walls, how are the lights in the roof constructed? The path tracing image looks really off to me, but I guess there is a natural explanation for that.
My best guess would be a lens in the middle not covering the entire hole in combination with next event estimation. Are the sides of the tube reflective?

If my guesses are correct, it would be quite interesting to see what happens if you turned those lenses into portals in combination with path tracing. Perhaps even importance sample every specular surface (to a lesser degree).
Each lamp is constructed like this: the tube is reflective, there is a lens at the bottom of the tube, and the top is actually the diffuse ceiling. And yes, the lens doesn't cover the whole tube, hence the nice circles of direct illumination in the PT image. Inside the tube there is a very small (floating) horizontal area light emitting downwards. The bright light source reflections that are only seen in the PPM and VCM images are the ceiling inside the lamps being strongly illuminated by light reflected off the lens back inside.

Making the lens a portal will probably help BPT, but I'm not sure by how much.
 >> keldor314 wrote:PPM (and VCM) can be made to be unbiased simply by having each photon store information about the previous hit, rather than the current hit.  Then when the eye path lands close to a photon, you simply shoot a ray toward the photon's source (be it directly at a light or simply the last thing the photon bounced off of).  This allows you to use an arbitrarily large search radius for nearby photons, at the cost of tracing one extra ray per sample.  Does anyone see any flaws with this idea?
Indeed you can reduce bias this way, and this has actually been tried before. See Ph. Bekaert's tech report "A Custom Designed Density Estimator for Light Transport". But you cannot make it fully unbiased because you need to compute the probability that a "photon" falls inside your search range, i.e. the acceptance probability integral that appears in the path pdf of the vertex merging formulation. And this integral cannot be computed analytically in general, because it depends on the visibility function, which in turn depends on the scene geometry, i.e. it's arbitrary. Analytic computation may be possible in certain cases, but I doubt it will be worth the pain/overhead in practice.
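In symbols, the problematic term is roughly the following (my notation, a sketch of what the post describes): the probability that a light sub-path vertex x, distributed with density p over the scene surfaces M, lands within radius r of the eye vertex z:

```latex
P_{\mathrm{acc}}(z) \;=\;
    \int_{\mathcal{M}} p(x)\,
    \mathbb{1}\!\left[\, \lVert x - z \rVert \le r \,\right]
    \mathrm{d}A(x)
\;\approx\; \pi r^2 \, p(z)
```

Vertex merging uses the right-hand approximation (locally flat surface, locally constant p); the exact integral depends on visibility and scene geometry and has no general closed form, which is what blocks a fully unbiased variant.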
 >> keldor314 wrote:You could further classify photons into several groups based on their intensity, with brighter photons being searched for at a larger radius, while their contribution would be divided by the area of the search area (volume?).  This should give some of the benefits of MLT.
This sounds interesting, but I'm not immediately sure this will be very useful. For example this will blur sharp caustics/shadows a lot.
[2012/10/15] [Post by dbz] [Re: Vertex merging paper and SmallVCM]

Interesting work. How would the different methods perform on outdoor scenes?
[2012/10/15] [Post by beason] [Re: Vertex merging paper and SmallVCM]

Wow, very cool! The paper looks great, I love the javascript image comparison, plus SmallVCM looks really neat and comprehensive! I look forward to examining it further. Congrats and great work!

I am curious though. The SmallVCM webpage makes a reference to "SimplePT", but I cannot find what this is on Google. I have an idea what you may be referring to, but I'm not sure [SMILEY :)]
[2012/10/15] [Post by tomasdavid] [Re: Vertex merging paper and SmallVCM]

Oops, that would be a typo on my side.
Kinda kept confusing the two words throughout the project, as SmallVCM was becoming less and less small (and, arguably, less and less simple). [SMILEY :-D]

Should be fixed (to smallpt) now.
[2012/10/15] [Post by ingenious] [Re: Vertex merging paper and SmallVCM]

>> dbz wrote:Interesting work. How would the different methods perform on outdoor scenes?
Well, it depends on the scene, illumination and viewpoint. On most typical scenes, VCM will most likely look like BPT, since PPM is not really good at those. In general, it should take the best from a BPT image and a PPM image.
[2012/10/15] [Post by beason] [Re: Vertex merging paper and SmallVCM]

>> tomasdavid wrote:Should be fixed (to smallpt) now.
Ah, thank you!

In case anyone else is interested, here are line counts:
Code:
    67 ray.hxx
    68 materials.hxx
    72 renderer.hxx
    81 eyelight.hxx
    81 frame.hxx
   128 camera.hxx
   192 rng.hxx
   216 hashgrid.hxx
   238 pathtracer.hxx
   261 framebuffer.hxx
   261 utils.hxx
   268 geometry.hxx
   388 config.hxx
   396 html_writer.hxx
   421 math.hxx
   488 scene.hxx
   512 lights.hxx
   578 bsdf.hxx
   948 vertexcm.hxx
  5664 total

Thanks for sharing a sample implementation of several techniques, including your new one!
[2012/11/20] [Post by Dade] [Re: Vertex merging paper and SmallVCM]

Yesterday I rendered my very first image with BiDir Vertex Merging. It is pretty much a straight port of the SmallVCM code, plus some of the old code used for SPPM. This is a 30-second rendering with the Metropolis sampler + BiDir on the CPU:
bidir.jpg
And this is a 30-second rendering with the Metropolis sampler + BiDir with VM on the CPU:
bidir-vm.jpg
Notice the classic SDS paths with caustics reflected in the mirror. I still have half a million things to tune and fix (for instance, the caustics look over-bright, probably something wrong with the MIS weights), but it already lets me start collecting some field experience with BiDir with VM:

1) VM is very easy to implement as an option on top of an existing BiDir. So easy that you may want to implement it even if it isn't strictly required by the kind of images you are going to render. It is simply a useful option to make available to users.

2) Not surprisingly, it shares with SPPM some of the critical implementation points (time spent building the k-NN accelerator over light vertices, k-NN lookup time, memory usage of the k-NN accelerator, etc.).

3) It also shares some rendering characteristics with SPPM. For instance, a large initial search radius leads to a large initial bias (i.e. blurred caustics), which is good for previews (i.e. less perceived high-frequency noise in the early stages of the rendering), etc.
[2012/11/20] [Post by ingenious] [Re: Vertex merging paper and SmallVCM]

Nice images indeed  [SMILEY ;)]
>> Dade wrote:1) VM is very easy to implement as an option on top of an existing BiDir. So easy that you may want to implement it even if it isn't strictly required by the kind of images you are going to render. It is simply a useful option to make available to users.
That's true. If you have a working bidirectional path tracer, which is the most challenging part to get right, then it's easy to add [SMILEY :)] The main difference is that you need to trace all light paths first.
>> Dade wrote:2) Not surprisingly, it shares with SPPM some of the critical implementation points (time spent building the k-NN accelerator over light vertices, k-NN lookup time, memory usage of the k-NN accelerator, etc.).
It's a bit more demanding than SPPM in that it performs merging at every eye sub-path vertex. In my implementation, full VCM can be 2-3x slower per iteration than (S)PPM, depending on the scene, but it most often provides much better quality.
>> Dade wrote:3) It also shares some rendering characteristics with SPPM. For instance, a large initial search radius leads to a large initial bias (i.e. blurred caustics), which is good for previews (i.e. less perceived high-frequency noise in the early stages of the rendering), etc.
One interesting thing in VCM is that the radius gives you direct control over the relative weights of the vertex connection and vertex merging techniques. A very small radius essentially gives you bidirectional path tracing. To get progressive photon mapping, however, you often need to set the radius to something impractically large, since vertex connection is very good for "long" diffuse transport (e.g. the teaser image in the paper).

One visual drawback of VCM is that it exhibits two types of noise/artifacts: mid-frequency splotches from merging and high-frequency noise from vertex connections. This is a visual effect of the correlated/uncorrelated sampling that is not currently taken into account by the weighting, as (to my knowledge) there is no good mathematical model for it. On very difficult scenes, it can be a bit unpleasant. Though you can also get high-frequency noise in SPPM on glossy surfaces.
[2012/11/20] [Post by ypoissant] [Re: Vertex merging paper and SmallVCM]

I can't see the pictures. I get a "you are not authorized to download the resource". Do I need to be a member of the luxrender forum?
[2012/11/20] [Post by Dade] [Re: Vertex merging paper and SmallVCM]

>> ypoissant wrote:I can't see the pictures. I get a "you are not authorized to download the resource". Do I need to be a member of the luxrender forum?
Ah, sorry, I edited the post and it should now work for everyone. BTW, this is the rendering with the MIS weights fixed:

[IMG #1 psor-cube-vm.png]
[2012/11/20] [Post by ypoissant] [Re: Vertex merging paper and SmallVCM]

Impressive.

There is usually some physical explanation for this kind of thing, but the lit parts on the floor are much brighter in the mirror reflection than in front of the mirror, and I can't find an explanation for it, especially given that the floor material seems to be Lambertian or nearly so. Could that be the MIS again? They looked more equal in intensity in your first renders.
[2012/11/21] [Post by Dade] [Re: Vertex merging paper and SmallVCM]

>> ypoissant wrote:Could that be the MIS again? They looked more equal in intensity in your first renders.
Yes, very likely  [SMILEY :oops:] Getting all MIS weights right is always a bit challenging  [SMILEY :D]
[2012/12/02] [Post by kaplanyan] [Re: Vertex merging paper and SmallVCM]

>> Dietger wrote:
In practice, we render only for a limited time (probably way too short for the asymptotic properties to have a significant impact). In that finite render time, PPM starts off doing things badly (bias artifacts around small objects) and then hopes to fix it later. The more PPM screws up initially, the longer it will take to fix this later. If PPM had just started with an appropriately small but fixed radius for each pixel, it could realize a better quality/bias trade-off in the same render time. Unfortunately, we usually don't know this optimal radius (too big => bias, too small => noise); that's why we are stuck with this reducing radius. The reducing radius should free us from having to guess the optimal radius, at the cost of some extra bias within the same render time. Unfortunately, it turns out that the extra bias/noise from a too big/small initial radius can still be pretty significant, even when rendering for a reasonably long time. This is why the initial radius is still such an important parameter for the PPM algorithm, PPM's theoretical consistency notwithstanding.
Dietger
Maybe my recent paper could be interesting reading on this topic: [LINK http://cg.ibds.kit.edu/APPM.php]
[2012/12/07] [Post by tomasdavid] [Re: Vertex merging paper and SmallVCM]

Anton: Have you looked at combining your stuff with VCM?
[2012/12/07] [Post by Dade] [Re: Vertex merging paper and SmallVCM]

>> kaplanyan wrote:Maybe my recent paper could be interesting reading on this topic: [LINK http://cg.ibds.kit.edu/APPM.php]
Thanks for sharing, very interesting.
[2012/12/07] [Post by dr_eck] [Re: Vertex merging paper and SmallVCM]

>> kaplanyan wrote:
Maybe my recent paper could be interesting reading on this topic: [LINK http://cg.ibds.kit.edu/APPM.php]
Nice application of error estimation to PPM!  I appreciate the realistic assessment of the method's limitations and look forward to seeing these limitations removed.  The world (or at least this community) needs a good method for choosing alpha and k.
[2012/12/08] [Post by kaplanyan] [Re: Vertex merging paper and SmallVCM]

>> tomasdavid wrote:Anton: Have you looked at combining your stuff with VCM?
Not yet. These two ideas were born in parallel. I think the combination with the PPM part of VCM should be straightforward. However, one would still need to think about how it applies to the BDPT part of VCM.
