(L) [2007/10/14] [lycium] [New Articles on RTRT @the Inquirer + Intel] Oh god, their forum has completely messed up my text and I can't edit posts [SMILEY Sad] In case anyone's interested, I'm cross-posting my "review" of the article here.
----
Some things came to mind while reading this article, and instead of just dismissing it out of hand I'm going to post some of those thoughts here. I am definitely a big fan of ray tracing (some people may remember my article on RTRT from years ago), but I also strive to be objective.
I say this because many people hold up ray tracing as the solution to realistic lighting, and consequently view the rasterisation renderers used in today's GPUs and consoles as inherently inferior.
It is a fact that rasterisation is a technically inferior algorithm in many respects, the most obvious of which have already been mentioned: reflection/refraction and shadowing. These are impossible to do robustly, without approximations, using rasterisation.
Even perspective shadow maps have corner cases, stencil buffering has depth-complexity limits, and both are slow; ray tracing provides robust, efficient solutions to both, elegantly. However, even the most biased ray tracing proponent will have to admit that rasterisation has its uses - one of which is the low cost of entry with respect to processing power. This is why we're using it now (and whoever simply says rasterisation sucks needs to shut up and play Bioshock!), but in the pursuit of greater realism we may just have to change algorithms. As an analogy: insertion sort is definitely the right algorithm to use in the simplified case where your list is already nearly sorted, but under less constrained circumstances you should use a more suitable algorithm.
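To make the analogy concrete, here's a minimal sketch (illustrative only) comparing insertion sort's comparison counts on nearly-sorted versus adversarial input:

```python
# Illustrative sketch of the sorting analogy: insertion sort is
# near-linear on almost-sorted input but quadratic in general, so
# the "right" algorithm depends on how constrained the problem is.
def insertion_sort(items):
    data = list(items)
    comparisons = 0
    for i in range(1, len(data)):
        j = i
        while j > 0:
            comparisons += 1
            if data[j - 1] <= data[j]:
                break  # already in order, stop sifting
            data[j - 1], data[j] = data[j], data[j - 1]
            j -= 1
    return data, comparisons

nearly_sorted = list(range(100))
nearly_sorted[50], nearly_sorted[51] = nearly_sorted[51], nearly_sorted[50]
worst_case = list(range(100))[::-1]  # reversed input

_, c_easy = insertion_sort(nearly_sorted)  # ~n comparisons
_, c_hard = insertion_sort(worst_case)     # ~n^2/2 comparisons
```

On the nearly-sorted list it does about 100 comparisons; on the reversed list, 4950. Same algorithm, wildly different suitability - exactly the rasterisation situation.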
Aliasing: yes, this is a downside to using ray tracing - objects can be missed by rays, and supersampling only pushes the problem further out. However, to say that the cost of supersampling is exponential (!!) is seriously incorrect: the cost grows linearly with the number of samples per pixel (quadratically with the per-axis sampling rate), which is polynomial.
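A quick back-of-the-envelope sketch (numbers purely illustrative) of why that cost is polynomial rather than exponential:

```python
# Cost of n x n supersampling per pixel grows as n^2: polynomial in
# the per-axis sampling rate, linear in the sample count, and in no
# sense exponential.
def rays_per_pixel(n):
    return n * n  # one primary ray per subsample in an n x n grid

costs = [rays_per_pixel(n) for n in (1, 2, 4, 8)]
# doubling the per-axis rate multiplies cost by a constant 4x;
# exponential growth would show an ever-increasing ratio instead
ratios = [costs[k + 1] // costs[k] for k in range(len(costs) - 1)]
```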
Static vs dynamic scenes: this is where I most disagree with the article, and it's also probably the most important point to consider when discussing real time ray tracing of complex scenes. Recently there have been HUGE successes in accelerating the construction of spatial subdivision structures; one of the most important advances here is the Bounding Interval Hierarchy, as presented by Wächter and Keller. You can build these in the blink of an eye, and the performance is incredibly close to that of a well-implemented k-d tree! What's more, you can determine your memory usage a priori, and since it's so ridiculously fast to build you can do it lazily... if I can stick my neck out a little with a guess: given dedicated hardware, you could just feed it a triangle soup as you would with rasterisation, and it would be completely transparent to the application.
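For flavour, here's a toy sketch of the general idea - a simplified spatial-median split, NOT Wächter and Keller's actual algorithm - showing why such builds are cheap: each node stores only a split axis and two clip planes, and construction is a handful of passes over the primitive list.

```python
# Toy bounding-interval-style build over axis-aligned boxes. Not the
# real BIH algorithm, just a sketch of its cost profile: roughly one
# min/max pass per node, and a fixed, predictable node layout.
def build(prims, axis=0, leaf_size=2):
    # prims: list of (min_point, max_point) AABBs, points as 3-tuples
    if len(prims) <= leaf_size:
        return ("leaf", prims)
    lo = min(p[0][axis] for p in prims)
    hi = max(p[1][axis] for p in prims)
    mid = 0.5 * (lo + hi)  # spatial median split plane
    left = [p for p in prims if 0.5 * (p[0][axis] + p[1][axis]) <= mid]
    right = [p for p in prims if 0.5 * (p[0][axis] + p[1][axis]) > mid]
    if not left or not right:  # degenerate split: make a leaf instead
        return ("leaf", prims)
    # the two "bounding intervals": right edge of the left children,
    # left edge of the right children (may overlap or leave a gap)
    clip = (max(p[1][axis] for p in left), min(p[0][axis] for p in right))
    return ("node", axis, clip,
            build(left, (axis + 1) % 3, leaf_size),
            build(right, (axis + 1) % 3, leaf_size))

boxes = [((0.0, 0, 0), (1.0, 1, 1)), ((2.0, 0, 0), (3.0, 1, 1)),
         ((4.0, 0, 0), (5.0, 1, 1)), ((6.0, 0, 0), (7.0, 1, 1))]
tree = build(boxes)
```

Real implementations build in-place over index arrays and clamp clip planes against the parent's bounds, but the point stands: there is very little work per node.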
So already we can build these subdivision structures extremely quickly, and only when needed. As if that wasn't enough, there's a really innovative way to cull triangles from a ray packet described by Reshetov (see [LINK http://www.sci.utah.edu/~wald/RT07/vertex_culling.pdf]), which means your trees can be substantially shallower - lower build latency, less memory, smaller caches.
These recent improvements have made a dramatic difference in the applicability of ray tracing to dynamic scenes, and they're one of the reasons it's getting so much attention lately - it's not just hype; big things are happening here.
Global illumination: I think this is a bit out of place when discussing realtime ray tracing. Even sophisticated implementations of Metropolis light transport will struggle in certain difficult situations, like viewing caustics via specular reflection. Since MLT is the most robust rendering method we know (photon mapping coming a close second, often needing Metropolis sampling in difficult settings), one can hardly expect lesser Monte Carlo methods to do the job reliably in every conceivable situation. Efficiently handling *any* mode of light transport is a massive big fat unsolved problem in rendering, and until that's solved for offline rendering we shouldn't expect to see it in realtime! Maybe in 10-15 years. For now, just doing constrained solutions (like radiosity) in realtime would be awesome, and Metropolis-sampled instant radiosity as described by Segovia (see [LINK http://bat710.univ-lyon1.fr/~bsegovia/papers/mir.html]) looks really promising.
"These proven results suggest that traditional Whitted ray tracing has relatively low lighting and image quality, and requires largely static scenes compared to what we are used to already."
Excuse me, where was that proof? Also, I've yet to encounter a game where there are complex worlds you can smash up at will... why? Because then you'd have to rebuild your BSP tree and that's slow? Hmm, sounds like a familiar problem eh... [SMILEY Razz]
Hopefully the game-changing nature of those rapidly constructed acceleration structures is now sinking in. BIH isn't the only game in town, either: pure BVHs are making a big comeback too. Ray tracing complex, dynamic scenes is a reality that more people should be aware of, and it's a pity this article missed that opportunity.
"Not exactly the quantum leap some would like you to believe!"
Journalistic point: a quantum leap is exactly the opposite of what people who use it in this context are intending to say!
More to the point: no one is saying "ZOMG ray tracing rules, rasterisation sucks, let's all switch now!", except maybe that recent Intel blog *grumble* - and even people inside Intel have said it was uninformed. What this is all about is The Holy Grail, and it will never be achieved with rasterisation. That much should be obvious, right? It's just a matter of time.
"Almost every solution to a problem with ray tracing involves shooting exponentially more rays"
Again with the "exponential"... please, can we be a little less dramatic/biased and more realistic? In renderers which seek to provide highly realistic renders you will never see an explosion of ray count with depth, because it's carefully handled by probabilistically following a single path; in faster realtime renderers you could well see a branching factor of two for rendering glass (Fresnel-weighted reflection and refraction), but they will typically cap the reflection depth.
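To put numbers on that, here's a tiny sketch (illustrative, not from any real renderer) contrasting the two strategies:

```python
import random

# One path per pixel, chosen probabilistically at each bounce: ray
# count is linear in path depth. Branching into BOTH reflection and
# refraction at every glass hit is what would grow as 2^depth, and
# that's exactly why realtime renderers cap the depth instead.
def rays_single_path(depth):
    return depth  # one ray per bounce, whatever the depth

def rays_full_branching(depth):
    return 2 ** depth - 1  # every hit spawns two child rays

def trace_one_path(max_depth, reflect_prob=0.5, seed=1):
    rng = random.Random(seed)
    rays = 0
    for _ in range(max_depth):
        rays += 1
        # choose ONE of reflection/refraction, e.g. Fresnel-weighted
        _branch = "reflect" if rng.random() < reflect_prob else "refract"
    return rays
```

At depth 16 the single-path strategy has shot 16 rays; full branching would need 65535. No production renderer does the latter.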
Let's just step back a little and ask what all this is for: precisely computed high-order reflection and refraction! Before complaining about the moderate cost of achieving this (and it is moderate, unless someone knows where we're all wasting computational resources), maybe we should *ahem* reflect on the fact that it's completely impossible to do with rasterisation. Lacking any point of comparison, shouldn't we focus on what we're getting for the extra computation?
The unconsidered bias against ray tracing in the quoted sentence should be clear by now. Yes Sherlock, it doesn't come for free - got any better ways to do it?
"However, the logic that insists ray tracing looks better than rasterisation goes way back to the 1970s, when rasterisation consisted of a few Gouraud or Phong shaded triangles while ray tracing was adding shadows, reflection and refraction. Things have changed a lot since then."
How ironic... ray tracing algorithms have also changed a lot just in the last two years, yet that is somehow exempt from being compared against the latest efforts that hardware-accelerated rasterisation can offer?
"GPUs today access textures at high speeds, and shader hardware allows textures to be used as global scene data. This approach to global illumination started with shadow maps, a light to surface occlusion term packed in a real-time calculated texture."
1. Yes, isn't it nice having dedicated hardware to do texturing and shading? I tell you, it's not easy to compete against hardware backed by billions of research dollars over the last decade, using a general purpose processor whose architecture dates back to the 386 (and which has to do everything else too). A nice apples-to-apples comparison for sure [SMILEY Rolling Eyes]
2. Rendering baked light maps isn't "an approach to global illumination", it's not even doing local illumination!
I stress the word baked: lightmaps were never calculated in realtime - that's shadow maps. Shadow maps are certainly not an approach to global illumination, and you're probably aware that those lightmaps weren't computed with rasterisation, but with ray tracing. All the Quake/Unreal games, and essentially all terrain lighting/shadowing, use ray tracing for this.
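As a sketch of what such baking looks like - a hypothetical toy scene with one ground plane and one occluding sphere, all names and numbers made up for illustration - each lightmap texel simply shoots a shadow ray at the light:

```python
import math

# Hypothetical toy "lightmap baker": a flat ground plane (y = 0) lit
# by a point light and shadowed by one sphere. Every texel casts one
# shadow ray; the precomputation is ray tracing, not rasterisation.
def ray_hits_sphere(origin, direction, center, radius):
    # standard ray/sphere test via the quadratic discriminant;
    # direction is assumed normalised, so the quadratic's a == 1
    oc = [o - c for o, c in zip(origin, center)]
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return False
    t = (-b - math.sqrt(disc)) / 2.0
    # for brevity we don't check that the hit lies before the light;
    # fine here, since the sphere sits between the plane and light
    return t > 1e-4  # hit in front of the origin occludes the light

def bake_lightmap(size, light, sphere_center, sphere_radius):
    texels = []
    for y in range(size):
        row = []
        for x in range(size):
            p = (x + 0.5, 0.0, y + 0.5)  # texel centre on the plane
            d = [l - pi for l, pi in zip(light, p)]
            norm = math.sqrt(sum(di * di for di in d))
            d = [di / norm for di in d]
            occluded = ray_hits_sphere(p, d, sphere_center, sphere_radius)
            row.append(0.0 if occluded else 1.0)  # hard 0/1 shadow term
        texels.append(row)
    return texels
```

A real baker integrates area lights and bounce lighting per texel, but the skeleton is the same: rays, not rasterised triangles.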
"This approach allows most global illumination models to be used with a rasterisation-only renderer; its simply a matter of figuring out how to capture the data in a texture."
This is, sorry, pure nonsense. On so many levels.
First of all, there is only one global illumination "model" - and it's not even a model, it's the solution to the rendering equation. This, together with the earlier "shadow maps == global illumination approach" statement, shows that someone doesn't really understand global illumination.
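For reference, that solution is to Kajiya's rendering equation; every global illumination algorithm is just an estimator for it:

```latex
% outgoing radiance = emission + incoming radiance integrated over
% the hemisphere, weighted by the BRDF and the projected solid angle
L_o(x, \omega_o) = L_e(x, \omega_o)
  + \int_{\Omega} f_r(x, \omega_i, \omega_o)\,
      L_i(x, \omega_i)\, (\omega_i \cdot n)\, \mathrm{d}\omega_i
```

Path tracing, photon mapping, MLT, radiosity: all of them approximate this one integral equation, just with different sampling strategies and different bias/variance trade-offs.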
Next up, if global illumination could be efficiently computed using rasterisation, don't you think people would be doing that, instead of spending hours rendering with Maxwell, Indigo, Mental Ray, Fryrender, Vray, etc.? Of course they would, in a heartbeat! To some extent rasterisation can help, but there's always corners being cut (e.g. a popular approach is photon mapping on the CPU - the real computational core - and then doing the final gather on the GPU).
Finally, global illumination is NOT just a matter of figuring out how to capture the data in a texture!! How patently absurd! How should this texture data be computed - by drawing a bunch of triangles? By ray tracing in a shader (and ray tracing is not especially well-suited to GPUs)? And supposing you do the ray tracing in a shader, how is that rasterisation and not ray tracing?
In any case, global illumination isn't what you store in a texture and later display; it's the method you use to compute the data in the first place. So if you write a renderer that displays precomputed global illumination (computed via ray tracing, obviously) through spherical harmonics or wavelets, it's just displaying the results of a global illumination simulation. If I watch an anime on my computer, my computer isn't drawing the anime on its own, it's just decompressing video; same principle.
Anyway, I've written WAY too much already so I just want to conclude with one (rhetorical, as I really must study for my exams) question:
Why all the negative bias against a rendering algorithm that has proven to be so helpful, and that will inevitably be so much more useful in the future?
(L) [2007/10/18] [bouliiii] [New Articles on RTRT @the Inquirer + Intel] Just about Metropolis Instant Radiosity: it will be much, much faster on a GPU with interleaved sampling patterns... and you can use a cache for the shadow maps (with much simpler techniques for handling flickering issues than the one Nvidia presented this year at the Symposium on Rendering). Dirtier, but much faster. Today, I am almost sure that ray tracing won't win for many, many years. In one or two years, many games will have geometry, subdivision surfaces, Bézier patches and so on tessellated on the fly, and I cannot see how "coherent ray tracing" (which is not really ray tracing - and its worse perversions using frustums beg the question: why not just use a rasterizer?!), with all its limitations, can compete with that. Adding one or two mirrors will not provide enough extra quality, and people will certainly prefer perfectly fine and displaced geometry to that.
If we want to achieve photorealistic rendering (and handle these f**** flickering issues), we are really far from interactive rendering.
As for Larrabee, even with 64 cores I don't believe it will provide enough power to make a competitive solution for ray tracing, but it will certainly be an outstanding co-processor.
PS: and we have to remember that with a GeForce 7 and a fine tessellation system, you can easily reach peak performance of more than 500 million triangles/sec... and with Office, you can easily get 1000 f/s....
PPS: and ray tracing does not easily handle huge scenes --> how do you deform a huge model? The classical pipeline:
1/ acquisition
2/ tessellation
3/ decimation --> outputs coarse mesh, normal map, displacement map
4/ on-the-fly tessellation and rasterization
with skinning or deformations on the coarse mesh, and so on....
This is efficient, cache-coherent, and everything you want. Doing it in an interactive ray tracing system (with a kind of triangle cache to replace the on-the-fly tessellation) is certainly a pain in the ass. Directly using the huge mesh also seems intractable.
So, if we want ray tracing to beat rasterization, we have to propose something other than mirrors, hard shadows, and 10^20 * log(n) ray traversal (and O(n) builds....). I don't know if the Intel guys are working on an *off-line* ray tracing system to generate perfect pictures quickly, but that may be a better choice than trying to beat rasterization in the field of real-time rendering.