Re: understanding sampling, filters, motion blur, DoF,...


(L) [2014/07/01] [tby friedlinguini] [Re: understanding sampling, filters, motion blur, DoF,...] Wayback!

>> tarlack wrote:@friedlinguini : don't you get structured aliasing patterns with your approach? I remember Blender doing this, and the number of images needed to avoid ghosting artifacts grew large as soon as an object moved really fast relative to the aperture time. For instance, reproducing a long-exposure photo (typically with those nice and appealing curved headlight trails in an urban setting) requires continuous sampling (or a prohibitively large number of images), maybe with some time-domain filtering.
What I'm suggesting assumes that motion blur is being done properly within a single frame, but doesn't address how. I had envisioned Monte Carlo sampling in the time domain, along with all the other sampling dimensions.
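A minimal sketch of what "time as one more sampling dimension" could look like for a primary camera sample (names like `sample_camera_ray` are hypothetical, not from any renderer discussed here):

```python
import random

def sample_camera_ray(px, py, shutter_open=0.0, shutter_close=1.0):
    """Draw one primary sample: jittered pixel position, a lens point
    (for depth of field), and a uniform random time inside the shutter
    interval. The time t is then used to evaluate the scene's motion."""
    x = px + random.random()                 # jitter within the pixel area
    y = py + random.random()
    u, v = random.random(), random.random()  # lens coordinates for DoF
    t = shutter_open + random.random() * (shutter_close - shutter_open)
    return (x, y, u, v, t)
```

Averaging many such samples per pixel integrates over pixel area, aperture, and shutter time simultaneously, which is what avoids the ghosting that a small fixed number of full-frame snapshots produces.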
 >> For motion blur and raytracing, it seems to me that having two bounding boxes and interpolating between them does not correctly account for nonlinear transformations, such as rotors, a drifting car, or even the path of an object under very long exposure times. Maybe a more accurate way would be to have a single BVH in 4D? Or, from a more intuitive point of view, a standard 3D BVH where the bounding box of each object is simply the union of the object's bounding boxes at all times during the aperture interval. Then, when you hit a bbox, you compute the exact intersection for your ray's t value, possibly using a per-object, time-based acceleration structure to speed up this computation. This way I think it should be possible to handle any nonlinear motion blur.
Sounds doable, though it makes the bounds less tight to the geometry. Taking the union of bounding boxes across a complex nonlinear motion path does not sound like fun, either. I seem to recall that RenderMan uses piecewise linear motion with a default of one segment per frame, overridable on a per-object basis. Such an approach could be used to preserve the tight bounds and easy computation of cessen's suggestion (maybe restricting to 2^N segments per object to avoid blowing up the number of segments for higher-level nodes).
Alternatively, average together a number of sub-frames, stratifying across the shutter time and using linear motion within each sub-frame.
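The basic operation both approaches rest on is interpolating a node's bounds to the ray's time. A sketch for the simple two-keyframe case (function names are illustrative only):

```python
def lerp(a, b, t):
    """Linear interpolation between scalars a and b at parameter t."""
    return a + (b - a) * t

def interp_aabb(box0, box1, t):
    """Interpolate an axis-aligned bounding box between its bounds at
    shutter open (box0) and shutter close (box1) for a ray time t in
    [0, 1]. Each box is ((minx, miny, minz), (maxx, maxy, maxz)).
    The interpolated box is what the ray is tested against."""
    lo = tuple(lerp(a, b, t) for a, b in zip(box0[0], box1[0]))
    hi = tuple(lerp(a, b, t) for a, b in zip(box0[1], box1[1]))
    return (lo, hi)
```

For linear motion this interpolated box bounds the geometry exactly at time t, which is why the bounds stay tight compared to taking one big union over the whole shutter interval.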
(L) [2014/07/01] [tby cessen] [Re: understanding sampling, filters, motion blur, DoF,...] Wayback!

>> tarlack wrote:For motion blur and raytracing, it seems to me that having two bounding boxes and interpolating between them does not correctly account for nonlinear transformations, such as rotors, a drifting car, or even the path of an object under very long exposure times.
I suggested having just two bounding boxes as a first implementation, to get experience with the technique.  For a production-ready implementation you want to allow for arbitrarily many bounding boxes, so that you can approximate curved motion with many linear segments.  See e.g. [LINK https://www.youtube.com/watch?v=rydLFAdhseo] at about 11 seconds.  The spinning rectangles in the back are actually done with deformation motion blur (I hadn't implemented transform motion blur when I rendered it), using 32 linear segments.
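Evaluating many-segment piecewise linear motion at a ray time then just means picking the segment containing t and interpolating within it. A sketch under the assumption of evenly spaced keyframes (the helper name is hypothetical):

```python
def sample_motion(keys, t):
    """Evaluate piecewise linear motion at time t in [0, 1].
    `keys` holds len(keys) - 1 linear segments: values (positions,
    or the components of bounding boxes) sampled at evenly spaced
    times across the shutter interval."""
    n = len(keys) - 1             # number of linear segments
    s = min(int(t * n), n - 1)    # index of the segment containing t
    local = t * n - s             # parameter within that segment
    return tuple(a + (b - a) * local for a, b in zip(keys[s], keys[s + 1]))
```

With 32 segments, a curved path like a spinning rectangle's corner is approximated by 32 short chords, which is usually indistinguishable from the true curve at typical shutter times.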
 >> tarlack wrote:Maybe a more accurate way would be to have a single BVH in 4D?
I'm not entirely sure what that would look like...?  In general, geometry exists for the entire duration of the frame, so bounding it in time isn't terribly useful.  Unless you mean having multiple bounds representing different time segments.  But then you're basically back to just doing what I described, except with piece-wise constant interpolation instead of piece-wise linear.
 >> tarlack wrote:Or, for a more intuitive point of view, a standard 3D BVH, where the bounding box of each object is simply the union of the bounding box of the object at all times during the aperture time.
As friedlinguini mentioned, your bounds can become pretty loose then.  For example, imagine a high-resolution character mesh that's running.  The distance of the motion far outstrips the size of the polygons, and you end up with an extremely poor quality BVH with lots of large and substantially overlapping bounds.
Or, taking your long exposure time photo example, each car in traffic would end up with very large bounds, all of which mostly overlap with each other.
 >> friedlinguini wrote:I seem to recall that RenderMan uses piecewise linear motion with a default of one segment per frame, overrideable on a per-object basis. Such an approach could be used to preserve the tight bounds and easy computation of cessen's suggestion (maybe restricting to 2^N segments per object to avoid blowing up the number of segments for higher-level nodes).
Yeah, in Psychopath I do piece-wise linear for everything, and restrict the number of geometry/transform segments to be a power of two so that they're easy to merge in the parent BVH nodes.  It works really well, at least with the basic scenes I've tested so far.
Something I haven't investigated yet, but intend to at some point, is better ways of building the BVH tree.  Right now I just take the objects at time 0.5 and build the tree's structure based on that.  But that doesn't account for e.g. fast-moving objects in an otherwise static scene.  Just building the tree based on time 0.5 will likely group a fast-moving object together with fine static geometry, which causes major efficiency problems for rays at times other than 0.5.  IIRC this is something that an MSBVH can help alleviate, but I'm guessing there are strategies even for vanilla BVHs that could improve things, and I think it would be fun to play around with that. [SMILEY :-)]
(L) [2014/07/01] [tby cessen] [Re: understanding sampling, filters, motion blur, DoF,...] Wayback!

>> MohamedSakr wrote:1- big question about BVHs, to understand them more: what is the structure of a BVH? I thought it was just some bounding boxes, each containing a set of triangles. And I won't lie, I've seen the term "node" and didn't understand it at the time, so I may need a clarification of the basic structure of a BVH
I strongly recommend buying a copy of "[LINK http://www.pbrt.org/ Physically Based Rendering: From Theory to Implementation]" by Matt Pharr and Greg Humphreys.  [LINK http://www.pbrt.org/chapters/pbrt-2ed-chap4.pdf Chapter 4] (sample chapter on their site) goes over the basics of several different kinds of acceleration structures, including BVHs.  But the whole book is amazing, and is a great reference.  I refer to it constantly when working on Psychopath.  It answers a lot of the questions in your OP, in great detail.
(L) [2014/07/02] [tby tarlack] [Re: understanding sampling, filters, motion blur, DoF,...] Wayback!

Well, the 4D BVH is just like a 3D BVH, but splitting along time is also allowed => no more loose bboxes. The 3D bbox + 1D time split is just a special and largely suboptimal case of a 4D BVH. The ray/geometry intersection is 4D as well, but finding an accurate intersection with arbitrarily animated geometry will be the hard part. However, it naturally handles non-rigid/nonlinear/skinned animated geometry, such as the Flash running to save the world or Superman flying to save Lois [SMILEY :mrgreen:] . About the complexity of computing the bounds: I agree that in the general case this surely is a major difficulty.
(L) [2014/09/06] [tby MohamedSakr] [Re: understanding sampling, filters, motion blur, DoF,...] Wayback!

a possibly related topic: how to calculate illumination taking distance into account, i.e. "I want to calculate the (Decay) of a path's (Throughput) depending on distance squared"
what I have is Throughput and Distance
newThroughput = f(OldThroughput, Distance);
what is the function?
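Assuming the inverse-square falloff the question itself describes, the function would be a sketch like the following (hypothetical name; note that in a path tracer that samples lights by area, the 1/d² factor typically shows up inside the geometry term of the light-sampling estimator rather than as a separate decay step):

```python
def decayed_throughput(old_throughput, distance):
    """Attenuate path throughput by inverse-square distance falloff,
    i.e. newThroughput = oldThroughput / distance^2."""
    return old_throughput / (distance * distance)
```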
