(L) [2007/09/23] [coldcity] [Point-sampling geometry] Hi all,
I've spent some time this weekend playing with point-sampled geometry as detailed in [LINK http://groups.csail.mit.edu/graphics/pubs/egwr2000_raytracing.pdf Ray Tracing Point Sampled Geometry, Schaufler and Jensen, 2000.]
Disclaimer: I'm getting a few visual artifacts that look to be due to numerical instability, so I'm not posting pictures at the moment (the paper has pretties anyway).
Rather than trying to get realtime raytracing, I generally go for minimum time-to-image of a single frame, and I'm blown away by the performance I've been getting in my naive monotracer. A regular grid-based implementation is outperforming my grid-based and kd-tree-based triangle raytracers with otherwise similar internals by an order of magnitude, on both subdivision computation and rendering. I'm not sure why the authors couldn't get the technique faster than their other renderer, but there we are; I haven't yet played with sample scenes beyond a 3D-scanned object in close-up, so I have a lot more testing to do.
I'm surprised I haven't heard more of this technique so I thought I'd outline it and ask if anyone here has played with it.
The basic gist of my interpretation and implementation is this:
 - There's just a single primitive - a point with an associated normal vector. This represents a sampling point on a surface.
 - Intersection testing is done as though the point were a disc perpendicular to the point's normal. The radius of the disc must be at least half as large as the largest gap between any two sample points on the mesh, so that the discs are just large enough to cover the model (an optimisation suggested by the paper is to vary the disc radius with local point density).
 - When the closest intersection is found, a second ray is cast a small distance into the mesh from just in front of the intersection point.
 - This second ray is there to quickly find the intersected point's neighbouring sample points: it is tested against the discs of all points neighbouring the closest intersected disc. The normal of each neighbouring point whose disc is intersected is pushed onto a stack, along with that neighbour's distance from the intersection point.
 - All normals in the stack are summed, each with a weight factor based on the neighbour's distance from the intersection point, and the resultant normal is normalised. In the paper a more correct intersection point is itself interpolated in the same way, but I'm not currently bothering. (The whole procedure is sketched in code just below.)
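For concreteness, here's a minimal sketch of the two passes as I've implemented them. The Vec3 helpers, the brute-force neighbour loop, and names like Splat and blendNormal are my own, not the paper's; a real implementation would of course walk the grid rather than test every disc.
[CODE]
#include <algorithm>
#include <cmath>
#include <vector>

struct Vec3 {
    float x, y, z;
    Vec3 operator+(const Vec3& o) const { return {x + o.x, y + o.y, z + o.z}; }
    Vec3 operator-(const Vec3& o) const { return {x - o.x, y - o.y, z - o.z}; }
    Vec3 operator*(float s) const { return {x * s, y * s, z * s}; }
};
float dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
float length(const Vec3& v) { return std::sqrt(dot(v, v)); }
Vec3 normalize(const Vec3& v) { return v * (1.0f / length(v)); }

struct Splat { Vec3 p, n; };  // sample point plus its surface normal
struct Ray   { Vec3 o, d; };  // d assumed normalised

// Intersect a ray with the disc of radius r centred at s.p and
// perpendicular to s.n. Returns the ray parameter t, or -1 on a miss.
float intersectDisc(const Ray& ray, const Splat& s, float r) {
    float denom = dot(ray.d, s.n);
    if (std::fabs(denom) < 1e-6f) return -1.0f;      // ray parallel to disc
    float t = dot(s.p - ray.o, s.n) / denom;         // hit on the disc's plane
    if (t < 0.0f) return -1.0f;
    Vec3 hit = ray.o + ray.d * t;
    if (length(hit - s.p) > r) return -1.0f;         // outside the disc
    return t;
}

// Second pass: cast a short ray from just in front of the closest hit,
// collect every disc it crosses, and blend the normals with a weight
// that falls off with the neighbour's distance from the hit point.
Vec3 blendNormal(const std::vector<Splat>& splats, const Ray& ray,
                 float tHit, float r) {
    const float eps = 1e-3f;
    Vec3 hit = ray.o + ray.d * tHit;
    Ray probe = { hit - ray.d * eps, ray.d };        // start just in front
    float maxDepth = 2.0f * r + eps;                 // a small distance "in"
    Vec3 n = {0.0f, 0.0f, 0.0f};
    for (const Splat& s : splats) {                  // grid walk in practice
        float t = intersectDisc(probe, s, r);
        if (t < 0.0f || t > maxDepth) continue;
        float d = length(s.p - hit);
        float w = std::max(0.0f, 1.0f - d / (2.0f * r));  // linear falloff
        n = n + s.n * w;
    }
    return normalize(n);  // the closest disc itself always contributes
}
[/CODE]
Shading with the closest splat's normal alone (skipping blendNormal) gives exactly the faceted look mentioned next.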
The technique is still somewhat valid even without the second ray, with the caveat that the mesh appears faceted by the discs. The facets have a Voronoi feel to the visual breakup and are less disturbing than triangle facets.
The prime advantage, and the reason even I've got it running quickly, is that the intersection routines (ray-disc and point-AABB) are so much cheaper than my ray-triangle and triangle-AABB tests.
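To show what I mean about the build side, here's the kind of point-AABB test I use when binning splats into grid cells (reusing the Vec3/Splat types from the sketch above). A conservative sphere-vs-box check is enough; a disc never needs the clipping or exact overlap testing a triangle does.
[CODE]
struct AABB { Vec3 lo, hi; };

// Does the splat's disc possibly overlap this cell? Treating the disc as
// a bounding sphere of radius r and growing the box by r is conservative
// but cheap - a few compares, versus a full triangle-box overlap test.
bool splatOverlapsBox(const Splat& s, const AABB& box, float r) {
    return s.p.x >= box.lo.x - r && s.p.x <= box.hi.x + r &&
           s.p.y >= box.lo.y - r && s.p.y <= box.hi.y + r &&
           s.p.z >= box.lo.z - r && s.p.z <= box.hi.z + r;
}
[/CODE]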
Constructing advanced partitioning schemes becomes super easy too when you don't need to consider split triangles and so on, so it makes a great testbed for playing with kd-trees, BIH, etc., with opportunities for new splitting heuristics based on point density, for instance - surely some paper opportunities in there?
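As an illustration of how trivial the build gets, here's a hedged sketch of a median-split kd-tree over splats, again using the Splat type from above (buildKd and KdNode are my own names, and a density-based heuristic would replace the plain median). A point never straddles a splitting plane, so each build step is just a partition, with no duplication or clipping; the only wrinkle is that traversal has to widen each node's bounds by the disc radius.
[CODE]
#include <algorithm>
#include <cstddef>

struct KdNode {
    int    axis;                   // 0, 1 or 2; -1 marks a leaf
    float  split;                  // splitting plane position
    KdNode *left = nullptr, *right = nullptr;
    Splat  *first = nullptr;       // leaf contents
    size_t count = 0;
};

static float axisPos(const Splat& s, int axis) {
    return axis == 0 ? s.p.x : axis == 1 ? s.p.y : s.p.z;
}

KdNode* buildKd(Splat* begin, Splat* end, int depth = 0) {
    KdNode* node = new KdNode;
    if (end - begin <= 8) {                          // small leaf
        node->axis  = -1;
        node->first = begin;
        node->count = end - begin;
        return node;
    }
    int axis = depth % 3;                            // round-robin axis
    Splat* mid = begin + (end - begin) / 2;
    // Median split on point position - no splat is duplicated or clipped.
    std::nth_element(begin, mid, end, [axis](const Splat& a, const Splat& b) {
        return axisPos(a, axis) < axisPos(b, axis);
    });
    node->axis  = axis;
    node->split = axisPos(*mid, axis);
    node->left  = buildKd(begin, mid, depth + 1);
    node->right = buildKd(mid, end, depth + 1);
    return node;
}
[/CODE]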
Another bonus is that a given mesh's vertex count can typically be half its triangle count (for a closed triangle mesh, Euler's formula gives roughly half as many vertices as triangles)! A mesh represented by a point cloud with associated normals is cheaper, and it's just as easy to interpolate normals and arbitrary attributes between such points as it is across a triangle's vertices.
There's also an elegance with points that I hadn't considered to be lacking with triangles until I started playing with this. Why, when attempting to render a representation of any of the standard test objects such as the bunny or buddha, should we as ray tracers use something as arbitrary as a triangle? The point seems a more direct solution. The real-life bunny can't be said to have triangles. It can definitely be said to have boundary coordinates in 3D space, though. Using points seems like skipping an unnecessary layer of abstraction.
It seems like points should be the way of the future; seeing as they don't seem to be touted as such, and you're all trying to write blazingly fast triangle renderers, is there something I'm missing here?
(L) [2007/09/23] [fpsunflower] [Point-sampling geometry] This technique is totally valid for dense models with lots of detail. For flat surfaces or things with sharp edges you're obviously going to lose some performance (and memory) over just plain big triangles. Surely you wouldn't want to render Sponza like this.
I played with this a while back and it did seem to be a very promising direction with lots of potential for optimization. There are some really cool LOD things you can do with this kind of representation (à la QSplat and other papers).
The problem for me comes down to the source of the data. You can probably render the bunny and all the other Stanford models like this, and you could also probably write a nice sub-d+displacement thing that outputs only points and handle all the Mudbox/ZBrush kind of models. But there's still a large class of models where converting to this point-sampled representation is going to hurt you. I also found that, from a quality standpoint, getting the point radius right was fairly important if the density of points is not uniform. And if you require a uniform point density, then you would have to maintain it for animated models, which could be expensive (there are some papers about this which I haven't read).
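For what it's worth, one way to pick a per-splat radius when density varies is to derive it from the distance to the k-th nearest neighbour. A rough brute-force sketch of that idea, in the style of the snippets above (my own, not from the paper; kNearestRadius is a made-up name, and the implicit scale factor is a quality knob - too small leaves holes, too large blurs detail):
[CODE]
#include <algorithm>
#include <cmath>
#include <vector>

// Per-splat radius from the distance to the k-th nearest neighbour.
// Brute force for clarity (assumes at least k other splats); you'd
// query the grid or kd-tree in practice.
float kNearestRadius(const std::vector<Splat>& splats, size_t i, int k = 7) {
    std::vector<float> d2;
    d2.reserve(splats.size());
    for (size_t j = 0; j < splats.size(); ++j) {
        if (j == i) continue;
        Vec3 v = splats[j].p - splats[i].p;
        d2.push_back(dot(v, v));                 // squared distances
    }
    std::nth_element(d2.begin(), d2.begin() + (k - 1), d2.end());
    return std::sqrt(d2[k - 1]);                 // scale to taste
}
[/CODE]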
So my general conclusion was that this is great for visualizing scanned static data, but that's the easiest case anyway. As a general purpose solution, I don't think it's the right surface representation. But there's certainly a large community advocating point-based graphics in general, so who knows if something really convincing might be shown in the future.
I'd love to see renders from your implementation. Especially in areas like the ears of the bunny where I was having trouble because of the high(er) curvature and not quite uniform density.
(L) [2007/09/24] [ingenious] [Point-sampling geometry] Totally agree with fpsunflower. I also had nice fps experiences with point-based ray tracing, but currently points are good only for representing very detailed scanned geometry. They are bad for architectural scenes, because of the non-uniformity of detail and because point clouds are just clouds - they don't carry any topological information, which you need especially for representing singularities in the original geometry.
Of course, ray tracing of points can be elegant and easy. And building kd-trees for points is a piece of cake.
And the obvious observation: when you are far from the object and a lot of detail projects onto a single pixel, points are better; when you are close to the object, triangles are better. So just combine both. But nobody has done it efficiently so far...
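Just to make the switching criterion concrete - a sketch of my own, not from any published hybrid: compare a splat's projected size against the pixel footprint, and fall back to triangles when the splats would get bigger than a pixel.
[CODE]
#include <cmath>

// True if a disc of radius r at distance dist projects to at most about
// one pixel, for a camera with vertical FOV fovY (radians) and the given
// image height - i.e. the point representation is detailed enough.
bool pointsSuffice(float r, float dist, float fovY, int imageHeight) {
    float pixelAngle = fovY / imageHeight;           // radians per pixel
    float splatAngle = 2.0f * std::atan(r / dist);   // angle subtended
    return splatAngle <= pixelAngle;
}
[/CODE]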