PDF of DoF
[2012/08/28] shocker_0x15:
Hi, thank you for your kind advice on the matter of BPT MIS :) (I'm still fighting that bug... :? )
I'm now trying to implement a depth-of-field effect, and I've almost finished, except for handling the PDF. I have a question about the PDF of DoF.
For example, in path tracing, a pixel color is computed in the following manner (a code sketch follows the list):
1. Choose a position (u, v) inside the pixel. If it is uniformly distributed, the PDF of this step is 1.0 (= 1.0 / (1.0 * 1.0)).
2. Trace a ray through this position and find the next intersection point.
3. If the ray hits a surface, trace a new ray according to some PDF, and divide the ray's weight by that PDF to get a Monte Carlo estimate:
f_s(x' -> x -> x'') * cos(N . x -> x'') / p_sigma(x -> x'')
If we explicitly add the contribution from a light to the current path, choose a point y'' randomly on the area of the light and compute the contribution as:
f_s(x' -> x -> y'') * G(x <-> y'') * L(y'' -> x) / p_A(y'')
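To make the bookkeeping concrete, here is a minimal sketch of those steps in C++. Scene, Hit, BSDF, RNG, and the sampling helpers are hypothetical stand-ins, not any particular renderer's API; only the PDF handling is meant literally.

```cpp
const int kMaxDepth = 8;  // arbitrary path length cap for this sketch

Vec3 estimatePixel(const Scene& scene, int px, int py, RNG& rng) {
    // Step 1: uniform position inside the pixel; p(u, v) = 1 on the
    // unit square, so there is nothing to divide by.
    Ray ray = scene.camera.generateRay(px + rng.next1D(), py + rng.next1D());

    Vec3 radiance(0.0f), throughput(1.0f);
    for (int depth = 0; depth < kMaxDepth; ++depth) {
        Hit hit;
        if (!scene.intersect(ray, hit))  // Step 2: find next intersection
            break;

        // Next-event estimation: y'' sampled on a light, area PDF p_A(y'').
        LightSample ls = scene.sampleLight(rng);
        if (scene.visible(hit.p, ls.p)) {
            radiance += throughput
                      * hit.bsdf.eval(-ray.d, ls.dir)  // f_s(x' -> x -> y'')
                      * geometryTerm(hit, ls)          // G(x <-> y'')
                      * ls.Le                          // L(y'' -> x)
                      / ls.pdfArea;                    // / p_A(y'')
        }

        // Step 3: sample a new direction with solid-angle PDF p_sigma
        // and weight the path by f_s * cos / p_sigma.
        BSDFSample bs = hit.bsdf.sample(-ray.d, rng);
        throughput *= bs.f * absDot(hit.n, bs.wi) / bs.pdfSolidAngle;
        ray = Ray(hit.p, bs.wi);
    }
    return radiance;
}
```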
If I integrate DoF (which chooses a point (lu, lv) on the lens surface) into this process, how should the PDF of DoF be handled? I tried inserting the lens-point sampling between steps 1 and 2, dividing the path weight by the PDF (= 1.0 / (PI * lensRadius^2)), but the result looks too dark, since this PDF has a large value.
I attached two images: one without DoF, the other with DoF (not divided by the PDF). As far as I can see from these images, doing nothing seems to produce the correct result. Why?
Thank you.
[Attachments: woDoF.png (without DoF), wDoF.png (with DoF) — images not archived]
[2012/08/29] apaffy:
I find it's easiest to consider a thin-lens camera model as the dual of an area light with a spot-like emission model:
* You sample the surface of the lens (i.e. the aperture in this model) for a position. So this has an area PDF of 1/area for the area of the lens.
* For that position, you sample an outgoing direction, usually by considering a uniform position on a virtual film plate at the focus distance (i.e. a subsample within the current pixel) and converting that to a direction and an angular PDF.
You should adjust your camera importance to keep the overall image brightness the same. Just as you would reduce the shutter time as you increase the aperture size with a real camera (assuming ISO etc. is kept constant), you should uniformly reduce the importance as you increase the aperture size to preserve the overall image brightness. A sketch of this sampling scheme follows.
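Here is a hedged sketch of the two-step model apaffy describes. Camera, concentricSampleDisk, rasterToDirection, and PI are assumed helpers and constants, not anyone's actual API; lensRadius and focusDist are the thin-lens parameters.

```cpp
struct LensRay {
    Ray   ray;      // primary ray leaving the aperture
    float pdfArea;  // area PDF of the sampled lens point
};

// (u, v):   subsample inside the current pixel (raster coordinates)
// (lu, lv): uniform sample in [0,1)^2 used to pick the lens point
LensRay sampleThinLens(float u, float v, float lu, float lv,
                       const Camera& cam) {
    // Sample the aperture uniformly: p_A(z0) = 1 / (pi * lensRadius^2).
    Vec2 d = concentricSampleDisk(lu, lv) * cam.lensRadius;
    Vec3 origin = cam.pos + d.x * cam.right + d.y * cam.up;
    float pdfArea = 1.0f / (PI * cam.lensRadius * cam.lensRadius);

    // All rays from (u, v) must converge on one point of the focus plane:
    // find where the pinhole ray would hit the plane at focusDist, then
    // aim the lens ray at that point.
    Vec3 pinholeDir = cam.rasterToDirection(u, v);  // unit direction
    float t = cam.focusDist / dot(pinholeDir, cam.forward);
    Vec3 focusPoint = cam.pos + t * pinholeDir;

    return { Ray(origin, normalize(focusPoint - origin)), pdfArea };
}
```

With apaffy's normalization, the importance carries a matching 1 / (PI * lensRadius^2) factor, so dividing by pdfArea cancels out and the brightness no longer depends on the aperture size.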
[2012/08/29] shocker_0x15:
Thanks :D
I drew a figure of my understanding. As an example, consider the contribution of a path of length 3 (counted from the lens surface).
Does reducing the importance according to the aperture size cancel out the division by the PDF? Similarly, I think the importance should also be reduced by the physical size of the pixel.
p_sigma(z0 -> z1) = 1, since the direction is uniquely determined by the chosen points zp and z0, isn't it? (Strictly speaking, it is a Dirac distribution.)
I'm not considering the physical size of the pixel for now. In this case, can I treat Apix as 1?
If this figure contains something wrong, would you point it out?
[Attachment: DoF_Figure.png — image not archived]
[2012/08/29] ingenious:
Following apaffy's mental model, I think it's easier to see what's going on if you first draw a box with a pinhole, where the back plane is the image plane and there's a single point on the front plane through which light can enter the box. Now enlarge this single point to a finite-sized hole. Obviously, more light will pass through, and the image will receive more energy. So the first step is to reduce the sensitivity of the film if you want to maintain the same overall image brightness. Then, having disregarded the refracting lens completely, it's easier to reason about the path PDFs, as a path now starts on the image plane, goes through the hole, and hits a surface. It's now also obvious that a whole cone of directions contributes to a single point on the image plane. Taking into account that a pixel also has finite area, here are your two degrees of freedom and the two random decisions: a point on the pixel and a direction inside the contributing cone.
Finally, if you want to see anything sharp on the image, you need to put a piece of glass in the pinhole to refract and focus some light :)
Hope this helps.
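In symbols, the cancellation that the film-sensitivity adjustment produces might look like this (a sketch, writing A_lens = pi r^2 for the aperture area and assuming the importance is defined to carry a matching 1/A_lens factor):

```latex
p_A(z_0) = \frac{1}{A_{\mathrm{lens}}}, \qquad
W_e(z_0 \to z_1) = \frac{1}{A_{\mathrm{lens}}}\,\widehat{W}_e(z_0 \to z_1)
\quad\Longrightarrow\quad
\frac{W_e(z_0 \to z_1)}{p_A(z_0)} = \widehat{W}_e(z_0 \to z_1)
```

The ratio is independent of the aperture size, so the overall brightness stays fixed as the lens grows.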
[2012/08/29] shocker_0x15:
Thank you, ingenious.
p_sigma(zp -> z0) is determined by z0, and I can get it by converting p_A(z0) as follows, can't I?
p_sigma(zp -> z0) = p_A(z0) * |z0 - zp|^2 / cos(N_0 . z0 -> zp)
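That conversion as a small helper (a sketch; Vec3, dot, absDot, and normalize are assumed utilities):

```cpp
// Convert an area PDF at z0 into a solid-angle PDF as seen from zp:
// p_sigma(zp -> z0) = p_A(z0) * |z0 - zp|^2 / cos(N_0 . z0 -> zp)
float pdfAreaToSolidAngle(float pdfArea, const Vec3& zp,
                          const Vec3& z0, const Vec3& n0) {
    Vec3  d        = zp - z0;
    float dist2    = dot(d, d);
    float cosTheta = absDot(n0, normalize(d));
    return pdfArea * dist2 / cosTheta;
}
```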
One more question. In the figure I posted, was this wrong?
f_j3 = [We0(zp) / pA(zp)] * [We1(zp -> z0) / pA(z0)] * ...
Should it correctly become the following?
f_j3 = [We0(zp) / pA(zp)] * [We1(zp -> z0) * G(zp <-> z0) / (p_sigma_p(zp -> z0) * G(zp <-> z0))] * ...
     = [We0(zp) / pA(zp)] * [We1(zp -> z0) * G(zp <-> z0) / pA(z0)] * ...
(p_sigma_p denotes a probability density with respect to projected solid angle.)
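The second form works because the projected-solid-angle density and the geometry term combine into the area density (a sketch in Veach-style notation):

```latex
p_{\sigma^{\perp}}(z_p \to z_0)\, G(z_p \leftrightarrow z_0) = p_A(z_0),
\qquad
G(z_p \leftrightarrow z_0)
  = \frac{\cos\theta_{p}\,\cos\theta_{0}}{\lVert z_p - z_0 \rVert^{2}}
```

So dividing We1 * G by p_sigma_p * G leaves exactly We1 * G / pA(z0), as in the second line above.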
[Attachment: DoF_Figure2.png — image not archived]
[2012/08/30] ingenious:
I don't have time to check the equations, but I think it's safe to consider both points zp and z0 as part of your path. So your full path is (zp, z0, z1, ...), and it has a well-defined PDF and contribution function. Both geometric factors should then be considered: G(zp <-> z0) and G(z0 <-> z1). You can even consider the scattering properties of the lens; it's now simply part of the scene. And the film importance function is obviously defined on zp and may also depend on the direction zp -> z0. That's a clean way to think of it, IMO.
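Written out for a length-3 path (zp, z0, z1, z2) under this view (a sketch; We is the importance, Le the emission, and f_s at z0 would be the idealized scattering function of the lens itself):

```latex
f(\bar z) = W_e(z_p \to z_0)\, G(z_p \leftrightarrow z_0)\,
            f_s(z_p \to z_0 \to z_1)\, G(z_0 \leftrightarrow z_1)\,
            f_s(z_0 \to z_1 \to z_2)\, G(z_1 \leftrightarrow z_2)\,
            L_e(z_2 \to z_1),
\qquad
p(\bar z) = p_A(z_p)\, p_A(z_0)\, p_A(z_1)\, p_A(z_2)
```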
[2012/08/30] shocker_0x15:
Thank you for your advice :)
In Veach's thesis, the importance function is defined on the point z0 and the direction z0 -> z1, but not on zp or zp -> z0. Is the reason for this that the point zp and the direction zp -> z0 are uniquely determined by the points z0 and z1? In that case, has the G(zp <-> z0) term been implicitly included in the importance function?
---
Let me assume that I want to maintain the image brightness, and that a sensor measures the brightness of the image as the amount of energy falling onto it. If the angle of view is fixed, the sensor gets bigger as the distance to the lens gets longer. The G(zp <-> z0) term decreases with the squared distance between z0 and zp, but p_A(zp) also varies with the sensor size (which is proportional to the squared distance). Generally speaking, shouldn't the importance function be reduced by the sensor size, since the total energy falling onto the sensor doesn't vary with the size?
I'm sorry for the poor explanation. :(
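That reasoning checks out in symbols. With uniform sampling, p_A(zp) = 1/A_pix and p_A(z0) = 1/A_lens, so (a sketch, ignoring the variation of G across the film):

```latex
\frac{W_e(z_p \to z_0)}{p_A(z_p)\, p_A(z_0)}
  = W_e(z_p \to z_0)\, A_{\mathrm{pix}}\, A_{\mathrm{lens}}
```

This is independent of the sensor and aperture sizes exactly when We carries a 1/(A_pix * A_lens) normalization, matching the brightness-preserving adjustments discussed above.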
[2012/08/31] ingenious:
Does Veach actually discuss lens camera models? I think he simply considers the first point of the eye path to be on the camera, and that's it. Whichever vertex is on the film, the importance function should be defined there. Maybe you can also incorporate some lens effects as well. Why not just rename zp to z0, z0 to z1, etc.? It's just a path; there's nothing special about the lens. There's only the optimization that, since you know light can only enter the camera through the lens, you importance-sample the directions from points on the film to go through the lens.
[2012/08/31] shocker_0x15:
Indeed, the lens is a scene object too :)
That is to say, I can treat the lens as one of the scene objects, and in that case I should include the G(zp <-> z0) term for completely physically based rendering. In his thesis, Veach simply simplified the zp-z0 section.
Thank you for your kindness. I'll get back to debugging MIS BPT and studying MLT 8-) :cry: