Progressive Photon Mapping


(L) [2009/06/04] [oodmb] [Progressive Photon Mapping] Wayback!

I extended my renderer TLRcam ([LINK http://tlrcam.blogspot.com/] [LINK http://people.brandeis.edu/~mirman]) to use the progressive photon mapping algorithm.  While I am not doing it quite the way the paper describes, my method has the same, if not better, big-O time per sample (a proof has yet to be done for that). Right off the bat I can think of a couple of optimizations: the first two nodes on the photon path can be ignored and not added to the map, and the direct lighting can be computed explicitly. I say this because the algorithm seems much slower for direct lighting, where graininess and complex lighting are not a problem, and it fails completely for antialiasing, which really only matters under direct lighting.
After about 15 million samples and 7 hours:
[IMG #1 Image]

4000 samples:
[IMG #2 Image]

17000 samples:
[IMG #3 Image]

1 million samples:
[IMG #4 Image]

Has anybody else tried implementing this yet?
[IMG #1]:Not scraped: https://web.archive.org/web/20090608034012im_/http://i43.tinypic.com/zwl3kp.png
[IMG #2]:Not scraped: https://web.archive.org/web/20090608034012im_/http://i41.tinypic.com/zbsm.png
[IMG #3]:Not scraped: https://web.archive.org/web/20090608034012im_/http://i40.tinypic.com/34do0tx.png
[IMG #4]:Not scraped: https://web.archive.org/web/20090608034012im_/http://i39.tinypic.com/amedeh.png
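For anyone following along, the per-hit-point update that the progressive photon mapping paper (Hachisuka et al. 2008) prescribes can be sketched roughly as below. This is a paraphrase of the paper's equations, not TLRcam's actual code, and the field names are illustrative:

```java
public class PpmHitPoint {
    // alpha in (0,1): fraction of newly found photons kept each pass
    // (the paper uses a value around 0.7)
    static final double ALPHA = 0.7;

    double radius2;   // current squared search radius
    double n;         // accumulated photon count at this hit point
    double tau;       // accumulated unnormalized flux

    PpmHitPoint(double initialRadius) {
        radius2 = initialRadius * initialRadius;
    }

    /** One progressive pass: m photons with total flux phi landed
        inside the current radius. Shrinks the radius and rescales
        the accumulated flux to the smaller disc. */
    void update(double m, double phi) {
        if (m == 0) return;
        double shrink = (n + ALPHA * m) / (n + m);
        radius2 *= shrink;              // R^2 reduction step
        tau = (tau + phi) * shrink;     // flux rescaled by R^2 ratio
        n += ALPHA * m;                 // keep only alpha of the new photons
    }

    public static void main(String[] args) {
        PpmHitPoint h = new PpmHitPoint(1.0);
        for (int pass = 0; pass < 3; pass++) h.update(100, 1.0);
        System.out.printf("r^2 = %.4f after 3 passes%n", h.radius2);
    }
}
```

The key property is that the radius shrinks toward zero while the photon count grows without bound, which is what makes the estimator consistent.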
(L) [2009/06/04] [beason] [Progressive Photon Mapping] Wayback!

I'm not seeing any images.
They work fine on your blog though:
[LINK http://tlrcam.blogspot.com/2009/06/progressive-photon-mapping-2.html]
edit: working now
(L) [2009/06/04] [graphicsMan69] [Progressive Photon Mapping] Wayback!

Can you characterize where the time is spent?  Is it tracing photons, building photon maps, or performing radiance estimates?  How many samples per pixel did you shoot?  What was your initial gather radius, and what is your alpha (the radius shrink factor)?  Is 15 million samples the number of photons collected?
If that's the case, 15 million photons should be doable in a few seconds on a modern machine for a scene as simple as the Cornell box.  Are you using a photon map at all?  A kd-tree or octree or something?  If you're doing brute-force k-nearest-neighbors, I think that would explain the slowness.
(L) [2009/06/05] [oodmb] [Progressive Photon Mapping] Wayback!

Most of the time seems to be spent tracing photons and collecting them, although it's a little backwards in my setup: you only build the kd-tree once.  The problem was mostly that I wasn't using any Russian roulette; when I fixed that, I went from 400 samples/ms to 4000 samples/ms.  Either way, everything is kind of slow in my renderer, and I don't plan on paying much attention to that since I haven't made many optimizations.  What I'd like to do is program this in my renderer, show that my algorithm is faster than plain progressive photon mapping, and then, if need be, do it in another renderer.
I'm using a KD tree for the nearest neighbors.
(L) [2009/06/05] [graphicsMan69] [Progressive Photon Mapping] Wayback!

>> oodmb wrote:Most of the time seems to be spent tracing photons and collecting them, although it's a little backwards in my setup: you only build the kd-tree once.  The problem was mostly that I wasn't using any Russian roulette; when I fixed that, I went from 400 samples/ms to 4000 samples/ms.  Either way, everything is kind of slow in my renderer, and I don't plan on paying much attention to that since I haven't made many optimizations.  What I'd like to do is program this in my renderer, show that my algorithm is faster than plain progressive photon mapping, and then, if need be, do it in another renderer.
I'm using a kd-tree for the nearest neighbors.
Since you only build the kd-tree once, I'm assuming it's over what Toshiya calls "hitpoints" instead of photons.  The idea seems a bit counterintuitive to me.  Certainly it's a big win to only build one kd-tree, but it seems like your photon collecting will suffer.  If you figure out that your method works better though, I'd like to hear the details!
(L) [2009/06/05] [neos] [Progressive Photon Mapping] Wayback!

I don't know, man, it doesn't seem even remotely right: 7 hours for a Cornell box at that quality?
(L) [2009/06/05] [oodmb] [Progressive Photon Mapping] Wayback!

I added explicit direct lighting, and improved the photon map, and after about 4mill photons:
[IMG #1 Image]

Like I said, my renderer is very slow regardless, and this stuff is only like a day old.  I'd like to do it eventually in another renderer, or enlist the help of somebody writing another renderer.
But yeah, getting rid of the kd-tree build on every pass should be a big plus, mostly by avoiding the sorting.  There is still a worst-case log(n) computation per photon, but that is only the worst case.
[IMG #1]:Not scraped: https://web.archive.org/web/20090608034012im_/http://i43.tinypic.com/34pdylw.png
(L) [2009/06/05] [graphicsMan69] [Progressive Photon Mapping] Wayback!

So is your kd-tree over hitpoints and not photons?  And are you performing nearest neighbors or a radius search?
(L) [2009/06/06] [oodmb] [Progressive Photon Mapping] Wayback!

Radius search, sort of.
(L) [2009/06/06] [neos] [Progressive Photon Mapping] Wayback!

btw. talking about performance, something not everyone may know: the latest Java (1.6.0_14) has working escape analysis.
What it does: when the VM sees an object that doesn't escape its scope, it allocates it on the stack. And it really works.
Now you can, for example, do all your vector arithmetic using new objects as the temporary results,
like:
Code: [LINK # Select all]
public Vec add(Vec v) {
    return new Vec(x + v.x, y + v.y, z + v.z);
}

is not a performance penalty anymore.
To turn it on, you have to use the flag: -XX:+DoEscapeAnalysis.
Regards.
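To illustrate the pattern neos describes, here is a minimal (hypothetical) Vec class in the same style. Every operation allocates a short-lived temporary, which is exactly what escape analysis can turn into stack allocation / scalar replacement when the JVM is run with -XX:+DoEscapeAnalysis:

```java
public class Vec {
    final double x, y, z;

    Vec(double x, double y, double z) { this.x = x; this.y = y; this.z = z; }

    // Each of these returns a fresh Vec; none of the temporaries
    // below escape the loop body, so the VM may avoid heap allocation.
    Vec add(Vec v) { return new Vec(x + v.x, y + v.y, z + v.z); }
    Vec mul(double s) { return new Vec(x * s, y * s, z * s); }

    public static void main(String[] args) {
        Vec acc = new Vec(0, 0, 0);
        for (int i = 0; i < 1_000_000; i++) {
            // Two temporaries per iteration, all non-escaping.
            acc = acc.add(new Vec(1, 2, 3).mul(0.5));
        }
        System.out.println(acc.x);
    }
}
```

Whether the allocations are actually elided depends on the JIT; the point is only that the natural, immutable-vector style no longer has to be rewritten into mutating in-place methods for performance.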
(L) [2009/06/06] [oodmb] [Progressive Photon Mapping] Wayback!

ooohhhh, that looks cool, I need to try that!
(L) [2009/06/06] [Stereo] [Progressive Photon Mapping] Wayback!

oodmb: That last picture you posted looks very nice. Given the fact that this is progressive photon mapping, do you have some sort of viewing application that allows one to see how the picture is refined? That'd be very interesting ...
 >> neos wrote:btw. talking about performance, something not everyone may know: the latest Java (1.6.0_14) has working escape analysis.
What it does: when the VM sees an object that doesn't escape its scope, it allocates it on the stack. And it really works.
Haha, finally! So, it took them ages to release this feature ... A former colleague of mine (who's now with Google in Zürich) wrote his dissertation on that and was the one to create a working prototype implementation based on Sun's HotSpot VM. [LINK http://ssw.jku.at/Teaching/PhDTheses/Kotzmann/ JKU] ftw!
Nevertheless, I'll probably stay the C++ SIMD tamer I am for a few more years to come.
(L) [2009/06/06] [oodmb] [Progressive Photon Mapping] Wayback!

Well, I wrote it as an option for my unbiased renderer, which progressively updates the display picture and saves the image every so often, so yes.
(L) [2009/06/06] [neos] [Progressive Photon Mapping] Wayback!

>> Stereo wrote:Haha, finally! So, it took them ages to release this feature ... A former colleague of mine (who's now with Google in Zürich) wrote his dissertation on that and was the one to create a working prototype implementation based on Sun's HotSpot VM. [LINK http://ssw.jku.at/Teaching/PhDTheses/Kotzmann/ JKU] ftw!
Nevertheless, I'll probably stay the C++ SIMD tamer I am for a few more years to come.
Cool for him [SMILEY :)].
And yeah, maybe I didn't sound that serious in my last post, but that update is really epic. What sucks, though, is that Mac users have to wait for Java from Apple [SMILEY :(].
Sorry for spam.
(L) [2009/06/12] [Guest] [Progressive Photon Mapping] Wayback!

I could not resist speaking up, so here it is.
I have tried several different data structures over hit points since the publication,
and the best one so far is spatial hashing. I am using this method in particular,
"Optimized Spatial Hashing for Collision Detection of Deformable Objects" [LINK http://www.beosil.com/download/CollisionDetectionHashing_VMV03.pdf]
but other hash functions will work. The point is to think of each hit point as a sphere
with its radius defined as in the progressive photon mapping paper. Since photons
are points, we can treat a photon-hit point query as point-sphere collision
detection. With this method, each photon needs only an O(1) query for hit points:
simply hash the photon position and retrieve the list of hit points stored in that
cell. The construction is significantly faster, too.
Hope it helps :->
Toshiya Hachisuka
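A minimal sketch of what Toshiya describes, using the hash function (and its prime constants) from the Teschner et al. paper he links. This is an illustration, not his implementation; the structure and field names are made up for the example:

```java
import java.util.ArrayList;
import java.util.List;

public class HitPointHash {
    final double cellSize;               // should be about the max hit-point radius
    final List<List<double[]>> table;    // each stored entry: {x, y, z, radius}

    HitPointHash(double cellSize, int tableSize) {
        this.cellSize = cellSize;
        table = new ArrayList<>();
        for (int i = 0; i < tableSize; i++) table.add(new ArrayList<>());
    }

    // Large primes from Teschner et al., "Optimized Spatial Hashing for
    // Collision Detection of Deformable Objects".
    static int hash(int ix, int iy, int iz, int tableSize) {
        int h = (ix * 73856093) ^ (iy * 19349663) ^ (iz * 83492791);
        return Math.floorMod(h, tableSize);
    }

    int cell(double v) { return (int) Math.floor(v / cellSize); }

    /** Insert a hit point into every cell its sphere of influence overlaps. */
    void insert(double x, double y, double z, double r) {
        for (int ix = cell(x - r); ix <= cell(x + r); ix++)
            for (int iy = cell(y - r); iy <= cell(y + r); iy++)
                for (int iz = cell(z - r); iz <= cell(z + r); iz++)
                    table.get(hash(ix, iy, iz, table.size()))
                         .add(new double[]{x, y, z, r});
    }

    /** O(1) photon query: candidate hit points in the photon's cell.
        The caller still tests the actual point-sphere distance. */
    List<double[]> query(double px, double py, double pz) {
        return table.get(hash(cell(px), cell(py), cell(pz), table.size()));
    }
}
```

Since radii shrink over the course of a PPM run, the cell size (and table) would be rebuilt periodically, as discussed later in the thread.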
(L) [2009/06/12] [graphicsMan69] [Progressive Photon Mapping] Wayback!

Hi Toshiya -
That's a good tidbit [SMILEY :)]  What is your experience with glossy materials in your PPM implementation?
  Brian
(L) [2009/06/12] [Guest] [Progressive Photon Mapping] Wayback!

>> What is your experience with glossy materials in your PPM implementation?
It certainly does work with glossy materials. Here is the visual proof  [SMILEY :)]
[IMG #1 Image]
[IMG #2 Image]
Everything is rendered using PPM (no direct illumination using shadow rays, etc.). I used the modified
Phong model for glossy reflections. I did not measure the rendering statistics, but the images above
used the same rendering time. That being said, I have to mention that it becomes terribly slow for
highly glossy materials (i.e., nearly specular materials) because the contribution of each photon
varies a lot for such a BRDF.
[IMG #1]:Not scraped: https://web.archive.org/web/20100611023641im_/http://graphics.ucsd.edu/~toshiya/lambertian.jpg
[IMG #2]:Not scraped: https://web.archive.org/web/20100611023641im_/http://graphics.ucsd.edu/~toshiya/phong.jpg
(L) [2009/06/12] [ingenious] [Progressive Photon Mapping] Wayback!

Hey, why the noise in the above pictures, particularly in the directly illuminated areas? I'd guess a too-small initial search radius, but that doesn't make much sense..
(L) [2009/06/12] [graphicsMan69] [Progressive Photon Mapping] Wayback!

Hey Toshiya -
Okay, I would expect very glossy materials to have problems, but that's typically true of most any GI rendering algorithm.
Seems to work pretty well for medium glossiness.
@ingenious -
I believe it's kind of "uniformly" noisy because he doesn't do direct lighting -- he said he's using PPM everywhere.
[edit]  Oops, I can see you already knew that.  nm
    Brian
(L) [2009/06/13] [Guest] [Progressive Photon Mapping] Wayback!

>> ingenious wrote:Hey, why the noise in the above pictures, particularly in the directly illuminated areas? I'd guess too small initial search radius, but it doesn't make much sense..
I do not see why it does not make much sense; a result of PPM will have noise in directly illuminated areas as well.
If we know a scene will be directly illuminated, it would be a good idea to combine with shadow rays as oodmb did.
Of course, using a larger initial radius will help get a smoother result for the same number of photons.
In any case, PPM is good for a scene illuminated by caustics from a small light source (like a light bulb).
Finding a good initial radius would be interesting future work, by the way.
Toshiya Hachisuka
(L) [2009/06/13] [ingenious] [Progressive Photon Mapping] Wayback!

@Toshiya: So you're saying the high frequency noise is due to the following: At some point the photon search areas of the primary hit points get small enough so that they don't overlap. Consecutive photon tracing passes result in different (number of) photons (with possibly slightly different flux) contributing to two neighboring primary hit points, hence the noise.
I had actually missed this in the paper. Now I zoomed at the images and noticed the high frequency noise there... Anyway, I love the idea!  [SMILEY :wink:]
(L) [2009/06/13] [Guest] [Progressive Photon Mapping] Wayback!

>> ingenious wrote:Consecutive photon tracing passes result in different (number of) photons (with possibly slightly different flux) contributing to two neighboring primary hit points, hence the noise.
I see :-> That's what you meant. To add to it: since neighboring primary hit points can have different radii,
the resulting radiance values can differ even if they happen to receive the same number of photons and flux.
It would be interesting to find a way to convert such high frequency noise into low frequency noise by
correlating photon tracing/radius reduction somehow.
Toshiya Hachisuka
(L) [2009/06/13] [graphicsMan69] [Progressive Photon Mapping] Wayback!

Hi Toshiya -
How long does it typically take for this noise to go away?   For the examples in your paper did you try rendering any of them until they were converged?
  Brian
(L) [2009/06/15] [Guest] [Progressive Photon Mapping] Wayback!

>> graphicsMan69 wrote:How long does it typically take for this noise to go away?   For the examples in your paper did you try rendering any of them until they were converged?
Well, it is difficult to say 'how long' because rendering time depends on a lot of factors. For example,
the box scene in the paper looked fine with an overnight (6-8 hour) rendering, but the bathroom scene took more.
In general, billions of photons seem to be required for really smooth results, so divide that number by the
photons/sec of your renderer. Shinji (one of the co-authors) has an implementation of PPM
in his renderer, so you might want to try it.
[LINK http://www.redqueenrenderer.com/]
I want to clarify that PPM is not aiming to be faster for "standard scenes", but to be robust
for "difficult illumination settings". Replacing the area light source in the Cornell box with a model of a
light bulb does not change the rendering time with PPM, but other methods will become significantly less
efficient, especially if the light source is small.
(L) [2009/06/15] [ingenious] [Progressive Photon Mapping] Wayback!

Thanks for the tip, Toshiya!
Here's what I got after ~30 minutes. The caustics on the back wall appeared quite late; the image was noisy until around minute 20. Twenty minutes after the 30-minute mark, the image still looks the same.
(L) [2009/06/15] [playmesumch00ns] [Progressive Photon Mapping] Wayback!

That looks fantastic!
(L) [2009/06/15] [ingenious] [Progressive Photon Mapping] Wayback!

It really does [SMILEY :)] I let it run on a remote machine continuously. Currently it has accumulated 300M photons and the image (and caustics particularly) is essentially noise free. I'll post a screenshot when it reaches 1 billion. Expect it in some hours [SMILEY :)]
(L) [2009/06/15] [monkeyman] [Progressive Photon Mapping] Wayback!

[SMILEY :shock:]  --> [SMILEY =D>]
(L) [2009/06/15] [graphicsMan69] [Progressive Photon Mapping] Wayback!

This is a very useful thread.  Thanks guys (especially Toshiya) for posting.
(L) [2009/06/15] [toxie] [Progressive Photon Mapping] Wayback!

indeed.. very interesting to get some more behind-the-scenes details.. thanx!
(L) [2009/06/15] [ingenious] [Progressive Photon Mapping] Wayback!

As promised, here's the picture after 1 billion photons have been accumulated. Now this one is completely noise-free. Nice caustics!
Actually, the image already looked almost the same at 500 million photons. The 1 billion photons took 8.5 hours to accumulate:
I'm attaching both images to the post in case I delete the files from the server at some point.
(L) [2009/06/15] [Guest] [Progressive Photon Mapping] Wayback!

You're welcome, guys :->
By the way, when we build an acceleration data structure over hit points (kd-tree, hash, BVH, or whatever you like),
it is in fact a good idea to rebuild it after every several photon tracing passes.
This is because the radius of each hit point shrinks over time, which makes the initial structure inefficient.
As for the frequency of reconstruction, 2^n works well in my implementation (i.e., rebuild after 1, 2, 4, 8, 16 ... passes).
# I registered to the forum. Excuse the unnecessary guest posting so far.
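The 2^n schedule reduces to a power-of-two test on the pass counter. A small sketch (the rebuild itself is left as a comment, since it depends on which structure is used):

```java
public class RebuildSchedule {
    /** True when the structure should be rebuilt before pass number
        `pass` (1-based), i.e. when pass is a power of two: 1, 2, 4, 8, ... */
    static boolean shouldRebuild(int pass) {
        // A power of two has exactly one bit set, so pass & (pass-1) == 0.
        return pass > 0 && (pass & (pass - 1)) == 0;
    }

    public static void main(String[] args) {
        for (int pass = 1; pass <= 20; pass++) {
            if (shouldRebuild(pass)) {
                // rebuild acceleration structure over hit points here,
                // using the now-smaller radii for tighter bounds
                System.out.println("rebuild before pass " + pass);
            }
            // trace one photon pass here
        }
    }
}
```

With this schedule the total number of rebuilds over N passes is only O(log N), so the amortized rebuild cost stays small.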
(L) [2009/06/15] [thachisu] [Progressive Photon Mapping] Wayback!

>> Guest wrote:# I registered to the forum. Excuse me for unnecessary guest posting so far.
I obviously forgot to login  [SMILEY :oops:]
(L) [2009/06/15] [ingenious] [Progressive Photon Mapping] Wayback!

Welcome to the best RT forum and thanks for the insight  [SMILEY :wink:]
(L) [2009/06/16] [oodmb] [Progressive Photon Mapping] Wayback!

>> By the way, when we build an acceleration data structure over hit points (kD-tree, hash, BVH or whatever you like),
it is in fact a good idea to rebuild the acceleration data structure after every several photon tracing passes.
In my implementation, I am experimenting with building the hierarchy over the points only and storing a max radius value per node, so that there is no need to rebuild; only the max values are updated.
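oodmb's idea can be sketched (in 1-D over positions sorted once, for brevity) as a tree that caches the maximum radius per node: shrinking a radius is a cheap upward update, and queries prune with the cached max. This is an illustrative reconstruction of the idea, not his code:

```java
import java.util.ArrayList;
import java.util.List;

public class MaxRadiusTree {
    final double[] pos, radius;   // hit points, sorted by position (1-D)
    final double[] nodeMax;       // cached per-node max radius
    final int n;

    MaxRadiusTree(double[] sortedPos, double[] radii) {
        pos = sortedPos; radius = radii; n = pos.length;
        nodeMax = new double[4 * n];
        build(1, 0, n - 1);
    }

    void build(int node, int lo, int hi) {
        if (lo == hi) { nodeMax[node] = radius[lo]; return; }
        int mid = (lo + hi) / 2;
        build(2 * node, lo, mid);
        build(2 * node + 1, mid + 1, hi);
        nodeMax[node] = Math.max(nodeMax[2 * node], nodeMax[2 * node + 1]);
    }

    /** Shrink one hit point's radius and propagate the new max upward:
        a cheap update instead of a full rebuild. */
    void updateRadius(int i, double r) { radius[i] = r; update(1, 0, n - 1, i); }

    void update(int node, int lo, int hi, int i) {
        if (lo == hi) { nodeMax[node] = radius[lo]; return; }
        int mid = (lo + hi) / 2;
        if (i <= mid) update(2 * node, lo, mid, i);
        else update(2 * node + 1, mid + 1, hi, i);
        nodeMax[node] = Math.max(nodeMax[2 * node], nodeMax[2 * node + 1]);
    }

    /** Indices of hit points whose current radius reaches photon position x. */
    List<Integer> query(double x) {
        List<Integer> out = new ArrayList<>();
        query(1, 0, n - 1, x, out);
        return out;
    }

    void query(int node, int lo, int hi, double x, List<Integer> out) {
        // Prune: no hit point under this node can reach x if x lies farther
        // than the node's max radius outside its position interval.
        if (x < pos[lo] - nodeMax[node] || x > pos[hi] + nodeMax[node]) return;
        if (lo == hi) { if (Math.abs(pos[lo] - x) <= radius[lo]) out.add(lo); return; }
        int mid = (lo + hi) / 2;
        query(2 * node, lo, mid, x, out);
        query(2 * node + 1, mid + 1, hi, x, out);
    }
}
```

The trade-off versus periodic rebuilds is that the cached max values become loose as radii shrink unevenly, so pruning degrades, while updates stay logarithmic.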
(L) [2009/07/21] [jufuny8138] [Progressive Photon Mapping] Wayback!

First, what freaking good questions and answers here..  [SMILEY =D>]
I'm a newbie here, sorry for writing about something too simple.
Dropping by, I happened to learn about the two-pass GI renderer named 'WinOSi'..
What I'm mostly wondering is whether PPM is related to the above two-pass algorithm or not. Hmm.
I didn't look into the code, but I think it might be similar except for the density estimation.. [SMILEY :roll:]
And I wonder how quickly PPM can give me a noiseless (converged) image compared to the naive two-pass algorithm above:?:
Maybe I should check the code again and come back..
(L) [2009/07/21] [thachisu] [Progressive Photon Mapping] Wayback!

>> jufuny8138 wrote:Dropping by, I happened to learn about the two-pass GI renderer named 'WinOSi'..
What I'm mostly wondering is whether PPM is related to the above two-pass algorithm or not. Hmm.
I didn't look into the code, but I think it might be similar except for the density estimation..
And I wonder how quickly PPM can give me a noiseless (converged) image compared to the naive two-pass algorithm above:?:
Maybe I should check the code again and come back..
WinOSi is definitely related as I modified one of the sample scenes used in WinOSi for the paper  [SMILEY 8)]
...ok, seriously, it is related because we can think of WinOSi as using a fixed radius density estimator.
This approach is inconsistent, meaning it does not converge to the correct solution in the limit.
Progressive Photon Mapping is consistent because of its radius reduction step, and that is the key of
the algorithm. Again, it is hard to say how fast PPM is, so you might want to try my sample code
([LINK http://graphics.ucsd.edu/~toshiya/smallppm.cpp]) to get an idea of the performance.
My guesstimate is that the performance is better than WinOSi's (i.e., it does not take days of rendering time).
(L) [2009/07/23] [jufuny8138] [Progressive Photon Mapping] Wayback!

>> thachisu wrote:WinOSi is definitely related as I modified one of the sample scenes used in WinOSi for the paper   
...ok, seriously, it is related because we can think of WinOSi as using a fixed radius density estimator.
This approach is inconsistent, meaning it does not converge to the correct solution in the limit.
Progressive Photon Mapping is consistent because of its radius reduction step, and that is the key of
the algorithm. Again, it is hard to say how fast PPM is, so you might want to try my sample code
([LINK http://graphics.ucsd.edu/~toshiya/smallppm.cpp]) to get an idea of the performance.
My guesstimate is that the performance is better than WinOSi's (i.e., it does not take days of rendering time).
To roll up the idea..
WinOSi - fixed-radius density estimator -> cannot converge to the correct solution, because there is no radius reduction step -> even unlimited work leaves residual bias, unlike pure MCRT.
PPM - shrinking-radius density estimator -> converges to the correct solution in the limit (consistent, though not unbiased) thanks to the radius reduction step -> no need for unlimited work. (What an evil trick.. [SMILEY :mrgreen:])
Thanks Toshiya~ [SMILEY :D]
Now I'm going to spend my time measuring the performance.. [SMILEY :mrgreen:]
