Re: pbrt-v3 source code now available


(L) [2015/07/08] [post by Dietger] [Re: pbrt-v3 source code now available] Wayback!

>> ypoissant wrote:On page 214, I don't understand the sentence "When working with these error intervals, it’s important to remember that because
(1 ± em) terms represent intervals, canceling them incorrect:"

Seems fine to me. Although the numerator and denominator contain the same 'term', they don't cancel because the terms are actually intervals: (1 ± em) is the interval [(1 - em),(1 + em)], so (1 ± em)/(1 ± em) = [(1 - em)/(1 + em), (1 + em)/(1 - em)] (with 0 < em < 1).
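To make the asymmetry concrete, here is a tiny numeric check (a sketch, not from the book; the value em = 0.1 is exaggerated for readability and the variable names are arbitrary):
Code:
#include <cstdio>

int main() {
    // Hypothetical rounding-error bound; 0.1 is much larger than a real
    // machine epsilon, but it makes the numbers easy to read.
    double em = 0.1;
    // Dividing the interval [1 - em, 1 + em] by itself gives the extreme quotients:
    double lo = (1 - em) / (1 + em);  // ~0.818
    double hi = (1 + em) / (1 - em);  // ~1.222
    std::printf("quotient interval: [%f, %f]\n", lo, hi);  // not [1, 1]
    return 0;
}
The quotient interval contains 1 but is strictly wider than a point, which is exactly why the (1 ± em) factors can't be cancelled.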
(L) [2015/07/08] [post by ypoissant] [Re: pbrt-v3 source code now available] Wayback!

>> Dietger wrote:ypoissant wrote:On page 214, I don't understand the sentence "When working with these error intervals, it’s important to remember that because
(1 ± em) terms represent intervals, canceling them incorrect:"

Seems fine to me. Although the numerator and denominator contain the same 'term', they don't cancel because the terms are actually intervals: (1 ± em) is the interval [(1 - em),(1 + em)], so (1 ± em)/(1 ± em) = [(1 - em)/(1 + em), (1 + em)/(1 - em)] (with 0 < em < 1).
I understand the mathematical implication. I thought the expression "canceling them incorrect" felt strange. But maybe that is because I'm not a native English speaker. Anyway, I underlined the expression in my previous post to make it clearer.
(L) [2015/07/16] [post by mattpharr] [Re: pbrt-v3 source code now available] Wayback!

>> dr_eck wrote:I took a quick run through the comments in the BDPT integrator but couldn't determine if you have implemented Georgiev's VCM or Hachisuka's near-equivalent.  I'm under the impression that these are really important contributions.  Are they included?  Will they be?

How about Markov Chain Monte Carlo?

Will you discuss Kelemen-style MLT versus Veach-style?  (Or did I miss that because I'm still at v1.0 of your book?)

How about Adaptive Progressive Photon Mapping?  (I only see SPPM in the code comments.)

PBRT is a great book.  I'd love to have a copy that includes all of the best available algorithms.
The second edition had Kelemen-style MLT; the third will have Hachisuka's improved variant (now running on top of BDPT with MIS).

Unfortunately we don't have VCM stuff or APPM in there. Basically, we can't add any more pages to the book, plus those algorithms are particularly tricky to implement. So the return on pages consumed didn't make sense--we'd have to have cut too much other content to include them.

There's always the 4th edition. :-)

Thanks,
Matt
(L) [2015/07/16] [post by mattpharr] [Re: pbrt-v3 source code now available] Wayback!

>> papaboo wrote:I'm with Koiava on this. Generally I find it much easier to return both sample and pdf as a single entity. It just makes it clear that you need to take the pdf into account.
The only exception is when we can sample something uniformly and the pdf is the same for all samples. In some of those cases we can just average by the number of samples taken and pdf be damned. However, even in those cases I would still implement both sample methods, just to make it clear that there is still a pdf involved.
It's an interesting question. And, for Lights and BSDFs, we always do return the value of the pdf with the sample, so it's not fully consistent.  (Though with those, there'd be a bunch of redundant computation if Pdf() were called right after the sampling method, which isn't true with shapes, where the pdfs are simpler.)
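For illustration only, the two interface styles being compared might look roughly like this (all names are invented for the sketch, and uniform unit-disk sampling stands in for the real light/BSDF/shape routines):
Code:
#include <cmath>

struct Point2 { float x, y; };

// Style 1: the sample and its pdf are returned together, so callers are
// reminded that a pdf is involved.
struct DiskSample {
    Point2 p;
    float pdf;
};

DiskSample SampleDisk(Point2 u) {
    float r = std::sqrt(u.x);
    float theta = 2.f * 3.14159265f * u.y;
    return {{r * std::cos(theta), r * std::sin(theta)},
            1.f / 3.14159265f};  // uniform density over the unit disk, w.r.t. area
}

// Style 2: separate entry points; the pdf is trivial here, but for lights and
// BSDFs a separate Pdf() call may repeat work the sampling routine already did.
Point2 SampleDiskPoint(Point2 u) { return SampleDisk(u).p; }
float DiskPdf(Point2 /*p*/) { return 1.f / 3.14159265f; }
Returning the pdf with the sample avoids the duplicated work in the second style, at the cost of a slightly bulkier interface.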

Anyway, good point--we'll think about it!

Thanks,
Matt
(L) [2015/07/16] [post by mattpharr] [Re: pbrt-v3 source code now available] Wayback!

>> ypoissant wrote:On page 214, I don't understand the sentence "When working with these error intervals, it’s important to remember that because
(1 ± em) terms represent intervals, canceling them incorrect:"

On page 219 "then it’s impossible to tell whether it is exactly zero or whether a small positive negative value has rounded to zero."

Page 229 "guaranteed to be on the right side of the surface so that they ray doesn’t incorrectly intersect the surface it’s leaving." and "a second source of rounding error most also be addressed"
Nice catches--fixed (in the book source, if not yet the posted PDF).

Thanks!

Matt
(L) [2015/07/20] [post by MohamedSakr] [Re: pbrt-v3 source code now available] Wayback!

I think at these lines:
[LINK https://github.com/mmp/pbrt-v3/blob/6b6c30f1e2bdb0052aed0136638d7c6106de779d/src/core/film.h#L130-L152]
the for loop should be using <=

so if we are at pixel_x = 11 with filter radius = 0.5:
xmin = 11 - 0.5 - 0.5 = 10
xmax = 11 - 0.5 + 0.5 + 1 = 12
loop from 10 to 12 (int i = xmin; i <= xmax; ++i)
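For context, here is a minimal sketch of the discrete-bound computation being discussed (paraphrased from memory, not copied from film.h; the pixel coordinate and radius are just the example values above). With an exclusive upper bound of floor(...) + 1, the loop pairs with <; an inclusive bound would pair with <= as suggested above:
Code:
#include <cmath>
#include <cstdio>

int main() {
    float px = 11.f;      // sample's pixel x coordinate (example value)
    float radius = 0.5f;  // filter radius (example value)
    // Shift to "discrete" coordinates, where pixel i covers [i, i+1).
    float dx = px - 0.5f;                        // 10.5
    int x0 = (int)std::ceil(dx - radius);        // 10
    int x1 = (int)std::floor(dx + radius) + 1;   // 12, exclusive upper bound
    // Half-open convention: iterate x0 <= x < x1, i.e. pixels 10 and 11.
    for (int x = x0; x < x1; ++x)
        std::printf("pixel %d\n", x);
    return 0;
}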
(L) [2015/07/21] [post by mrluzeiro] [Re: pbrt-v3 source code now available] Wayback!

Hi Matt,

[1] I am not a C++ expert, but isn't this line here redundant?
inline uint32_t LeftShift3(uint32_t x);
[LINK https://github.com/mmp/pbrt-v3/blob/master/src/accelerators/bvh.cpp#L81]

EDIT:
[2] The depth parameter in flattenBVHTree(BVHBuildNode *node, int *offset, int depth)
[LINK https://github.com/mmp/pbrt-v3/blob/master/src/accelerators/bvh.cpp#L614]
looks like it has no purpose / is never used.

[3] At [LINK https://github.com/mmp/pbrt-v3/blob/master/src/accelerators/bvh.cpp#L273]
I was confused by the fall-through in that switch; then I noticed that in the pbrt-v2 source code you have a comment that helps explain the fall-through and makes things clearer.
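For readers following along, the kind of annotation being referred to looks roughly like this (a simplified illustration, not the actual bvh.cpp code; the case logic and names are made up for the sketch):
Code:
#include <cstdio>

enum class SplitMethod { SAH, HLBVH, Middle, EqualCounts };

// Illustrative only: an explicit comment makes the intentional fall-through
// from one case into the next obvious to the reader.
void DescribeSplit(SplitMethod m, bool midpointSplitFailed) {
    switch (m) {
    case SplitMethod::Middle:
        if (!midpointSplitFailed) {
            std::printf("split at the bounds midpoint\n");
            break;
        }
        // Intentional fall through: if the midpoint split put everything on
        // one side, use the EqualCounts strategy instead.
    case SplitMethod::EqualCounts:
        std::printf("split into two equally sized halves\n");
        break;
    default:
        std::printf("use an SAH-based split\n");
        break;
    }
}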

EDIT2: I added these issues to the GitHub issue tracker.
(L) [2015/07/22] [post by dr_eck] [Re: pbrt-v3 source code now available] Wayback!

>> mattpharr wrote:dr_eck wrote:I took a quick run through the comments in the BDPT integrator but couldn't determine if you have implemented Georgiev's VCM or Hachisuka's near-equivalent.  I'm under the impression that these are really important contributions.  Are they included?  Will they be?

How about Markov Chain Monte Carlo?

Will you discuss Kelemen-style MLT versus Veach-style?  (Or did I miss that because I'm still at v1.0 of your book?)

How about Adaptive Progressive Photon Mapping?  (I only see SPPM in the code comments.)

PBRT is a great book.  I'd love to have a copy that includes all of the best available algorithms.
The second edition had Kelemen-style MLT; the third will have Hachisuka's improved variant (now running on top of BDPT with MIS).

Unfortunately we don't have VCM stuff or APPM in there. Basically, we can't add any more pages to the book, plus those algorithms are particularly tricky to implement. So the return on pages consumed didn't make sense--we'd have to have cut too much other content to include them.

There's always the 4th edition.

Thanks,
Matt
I'm not sure how the 4th edition would solve the page count problem.  Perhaps a second volume that addresses "Advanced PBRT Techniques" is called for.  It could be the basis for a second semester course.
(L) [2015/07/23] [post by papaboo] [Re: pbrt-v3 source code now available] Wayback!

You could always implement them as very well-documented integrators and add them to the source code; then they wouldn't even have to be ready for the release. For VCM/UPS, though, there are already three papers and SmallVCM. If you've read and understood PBRT, that should be enough to get anyone started on their own VCM implementation.
(L) [2015/07/27] [post by andersll] [Re: pbrt-v3 source code now available] Wayback!

In the BeckmannSample11 function in microfacet.cpp you have special logic to check whether we're at normal incidence and sample the slope directly, but then the rest of the function continues as normal and *slope_x and *slope_y will be overwritten. Shouldn't the function return after line 48?

[LINK https://github.com/mmp/pbrt-v3/blob/6b6c30f1e2bdb0052aed0136638d7c6106de779d/src/core/microfacet.cpp#L43]
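For anyone skimming, the suggested fix is just an early return at the end of the normal-incidence branch. A rough sketch of the shape of such a branch (this uses a generic Beckmann slope sample, not the code from microfacet.cpp):
Code:
#include <cmath>

// Sketch of a normal-incidence special case inside a slope-sampling routine.
// At normal incidence the visible-slope distribution reduces to the full 2D
// Beckmann slope distribution, which can be sampled directly.
static void SampleSlopesNormalIncidence(float U1, float U2,
                                        float *slope_x, float *slope_y) {
    float r = std::sqrt(-std::log(1.f - U1));  // radius: exp(-r^2) falloff
    float phi = 2.f * 3.14159265f * U2;        // uniform azimuth
    *slope_x = r * std::cos(phi);
    *slope_y = r * std::sin(phi);
    // Returning here is the point of the report above: without an early
    // return, later code would overwrite *slope_x and *slope_y.
}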
(L) [2015/08/08] [cozdas] [Re: pbrt-v3 source code now available] Wayback!

Great news! Thanks for this book, Matt; it's a real gem.
I've got one request though: it'd be great if you could release the book in "print replica" format. The format of the existing (v2) edition produces hard-to-follow page layouts, with disproportionate elements on the page (sample page attached to demonstrate my point). Realtime Rendering is a good example of the "print replica" format. With pbrt's larger pages and smaller text it might be tricky to do, but I'd appreciate it if you'd at least consider it.
(L) [2015/08/14] [ziu] [Re: pbrt-v3 source code now available] Wayback!

Thanks a lot for the code, Matt! I have been using the CPU HLBVH builder code, throwing all of my test scenes at it, and so far it seems to be holding up OK. I only had an issue with scenes containing infinite-size primitives (e.g. half-spaces in an extension I added), so I ended up removing those prior to construction.
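For others hitting the same issue, here is a minimal sketch of that kind of pre-pass (generic types and names, not pbrt's; the idea is just to drop anything with non-finite bounds before the builder sees it):
Code:
#include <algorithm>
#include <cmath>
#include <vector>

struct Bounds3 { float min[3], max[3]; };
struct Prim { Bounds3 worldBound; /* ... */ };

// Keep only primitives whose world-space bounds are finite, so an
// HLBVH/Morton-code builder never sees infinite extents.
static void RemoveUnboundedPrims(std::vector<Prim> &prims) {
    prims.erase(std::remove_if(prims.begin(), prims.end(),
                               [](const Prim &p) {
                                   for (int i = 0; i < 3; ++i)
                                       if (!std::isfinite(p.worldBound.min[i]) ||
                                           !std::isfinite(p.worldBound.max[i]))
                                           return true;  // drop this primitive
                                   return false;
                               }),
                prims.end());
}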
(L) [2015/08/25] [toxie] [Re: pbrt-v3 source code now available] Wayback!

In [LINK https://github.com/mmp/pbrt-v3/blob/master/src/core/pbrt.h#L235] and [LINK https://github.com/mmp/pbrt-v3/blob/master/src/core/pbrt.h#L249]
the checks for +0.0 and -0.0 are not correct, as by IEEE 754 +0.0 == -0.0 is true. So you would want to check for this special case using the uint bit pattern you get below instead.
EDIT: oh, forget about it; the code will of course still do what's needed. Time to stop staring at code now, I guess... ;)
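For anyone else who pauses on the same line, here is a sketch of the pattern in question (paraphrased, not copied from pbrt.h): the v == -0.f test is indeed also true for +0.f, but the assignment it guards is a no-op in that case, so the subsequent bit manipulation still does the right thing.
Code:
#include <cmath>
#include <cstdint>
#include <cstring>

// Paraphrased sketch of a "next representable float up" helper.
static float NextUp(float v) {
    if (std::isinf(v) && v > 0.f) return v;  // +inf has no successor
    if (v == -0.f) v = 0.f;  // also matches +0.f, but then this is a no-op
    std::uint32_t bits;
    std::memcpy(&bits, &v, sizeof(float));
    if (v >= 0.f) ++bits;  // non-negative: step away from zero
    else --bits;           // negative: step toward zero
    float r;
    std::memcpy(&r, &bits, sizeof(float));
    return r;
}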
(L) [2015/09/02] [ziu] [Re: pbrt-v3 source code now available] Wayback!

I made a change to the code. I had issues with some scenes with abnormally small bounding boxes blowing up the program by triggering this assert:
Code:
Assert(centroidBounds.pMax[dim] != centroidBounds.pMin[dim]);
[LINK https://github.com/mmp/pbrt-v3/blob/master/src/accelerators/bvh.cpp#L527]
So I changed the code in BVHAccel::buildUpperSAH to be like this:
Code:
int mid;
if (centroidBounds.pMax[dim] != centroidBounds.pMin[dim]) {
    // Do the fancy split-position-finding method (the existing code path).
    mid = pmid - treelet_roots;
} else {
    // Degenerate centroid bounds: fall back to a simple median split.
    mid = start + (end - start) / 2;
}

I know it's a bit of a hack, but at least it can still find a split point somewhere and generate a viable BVH.
Hope it helps,
