New Articles on RTRT @the Inquirer + Intel

(L) [2007/10/11] [Shadow007] [New Articles on RTRT @the Inquirer + Intel] Wayback!

There's a new article @the Inquirer regarding Real Time Ray Tracing ...

[LINK http://www.theinquirer.net/gb/inquirer/news/2007/10/11/intel-man-touts-real-ray Intel man touts real-time ray-tracing]

Regarding the following blog:

[LINK http://blogs.intel.com/research/2007/10/real_time_raytracing_the_end_o.html Real Time Ray-Tracing: The End of Rasterization?]
(L) [2007/10/11] [davepermen] [New Articles on RTRT @the Inquirer + Intel] Wayback!

what i really like about raytracing is that we can go down to the low-level rasterisation tricks (cubemaps, artefacted shadowmaps, etc.) to gain performance, and fake stuff where quality is not really needed. similar tricks exist for global illumination (scaled-down sampling and such). _but_ with raytracing we _can_ scale up to the perfect image, something rasterisation doesn't allow.


this is why i love it. rasterisation starts at the pic we see in the blog and goes up to a nicely faked image.. raytracing can start there and go up to the real thing.. so it is, sort of, the logical next step.
(L) [2007/10/11] [Phantom] [New Articles on RTRT @the Inquirer + Intel] Wayback!

I have a 'google alert' set up that notifies me of any new hits on the search term 'real time ray tracing'. I've been using it for over a year now. Over the course of that year, I got two notifications. Recently, the rate has increased quite a bit, and every new hit is linked to Intel. Seems to me that Intel is readying the community for real-time ray tracing...
_________________
--------------------------------------------------------------

Whatever
(L) [2007/10/11] [davepermen] [New Articles on RTRT @the Inquirer + Intel] Wayback!

yep, they're really pushing it. it doesn't matter if we're actually there or not (think about shading.. how long it took till gpus really got some nice shading hardware (sm3.0), and till it got fast enough, compared to the ps1.0 hype.. those were the days..)


but the pushing is there, that's great.
(L) [2007/10/11] [Michael77] [New Articles on RTRT @the Inquirer + Intel] Wayback!

In my opinion tracing rays is by far not everything. Sadly I haven't seen the new Intel tracer yet, but I doubt it is capable of doing shading the way GPUs currently can, not to mention the high-quality raytracing stuff (ray differentials alone will probably hurt performance quite a bit - at least they do for me, but maybe I am doing something wrong). And that's the problem at the moment: everybody shows that realtime raytracing is possible on current hardware, but the images are much worse than current state-of-the-art games (ok, I guess part of this is because nobody is actually doing content creation for RTRT). It looks like most research time on RTRT is spent on tracing rays and how to do it fast. But there is nearly no paper on shading for realtime raytracing, although this is the part that makes things interesting. And when you have a look at offline raytracers: most of the time is spent doing the shading; tracing rays makes up about 25% or so.


So we are currently at the point where we can say: "ok, we finally know how to trace (primary) rays fast and how to dynamically build our acceleration structures. Now let's make it look good!"


Just my 2 cents
(L) [2007/10/11] [Shadow007] [New Articles on RTRT @the Inquirer + Intel] Wayback!

Yes, but we're comparing "custom" processors with generic ones ...

If shading were done on specialized chip parts (or even if the entire raytracing were done on specialized cores), the performance balance could well be on the RTRT side ...
(L) [2007/10/11] [toxie] [New Articles on RTRT @the Inquirer + Intel] Wayback!

these two articles are maybe the most braindead texts i've read since the RT'07 proceedings!
_________________
The box. You opened it. We came.
(L) [2007/10/11] [Xela] [New Articles on RTRT @the Inquirer + Intel] Wayback!

Guys,


This blog posting did not come from our group. The guy who wrote it, Jeffrey Howard, works in marketing. He neither consulted us nor understands the issues he is talking about.


The only expert in this whole publication is Bruce Walter, and his posting was precise and to the point.


It is stupid, but sometimes things are totally out of control.



Cheers,
_________________
[Xela]
(L) [2007/10/12] [jogshy] [New Articles on RTRT @the Inquirer + Intel] Wayback!

I'm gonna kidnap Daniel Pohl and force him and his friends to tell me all about Intel's secret Larrabee and raytracing plans! ALL!
_________________
I know I ask too much... but i'm in panic mode and there is no panic button to press [SMILEY stick out tongue]
(L) [2007/10/12] [toxie] [New Articles on RTRT @the Inquirer + Intel] Wayback!

not to destroy your visions, but: larrabee and intel-rt will not solve any of the problems RTRT currently has!
_________________
The box. You opened it. We came.
(L) [2007/10/13] [lycium] [New Articles on RTRT @the Inquirer + Intel] Wayback!

another article: [LINK http://www.beyond3d.com/content/articles/94/1]


unfortunately it too is completely braindead. it's clear the guy wrote that after spending a few minutes googling and becoming an armchair expert on the subject...
(L) [2007/10/13] [Phantom] [New Articles on RTRT @the Inquirer + Intel] Wayback!

I came here this early in the morning to post the same url... You have a google alert too? [SMILEY Smile]
_________________
--------------------------------------------------------------

Whatever
(L) [2007/10/13] [lycium] [New Articles on RTRT @the Inquirer + Intel] Wayback!

nope. how does one set such an alert?
(L) [2007/10/14] [lycium] [New Articles on RTRT @the Inquirer + Intel] Wayback!

oh god, their forum has completely messed up my text and i can't edit posts [SMILEY Sad] in case anyone's interested, i'm cross-posting my "review" of the article here.


----


Some things came to mind while reading this article, and instead of just dismissing it out of hand I'm going to post some of these thoughts here. I am definitely a big fan of ray tracing (maybe some people will remember my article on RTRT from years ago), but I also strive to be objective.



"This is because many people hold up ray tracing as the solution to realistic lighting and as such, they view current rasterisation renderers as used in GPUs and consoles today as inherently inferior."


It is a fact that rasterisation is a technically inferior algorithm in many respects, the most obvious of which have already been mentioned: reflection/refraction, shadowing. These are impossible to do robustly and without approximations using rasterisation.


Even perspective shadow maps have corner cases, stencil buffering has depth complexity limits, and both are slow; ray tracing provides robust, efficient solutions to both, elegantly. However, even the most biased ray tracing proponent will have to admit that rasterisation has its uses - one of which is the "low cost of entry" wrt processing power. This is why we're using it now (and whoever simply says rasterisation sucks needs to shut up and play Bioshock!), but in the pursuit of greater realism we may just have to change algorithms. As an analogy, an insertion sort is definitely the right algorithm to use in the simplified case where your list is already nearly sorted, however under less constrained circumstances you should use a more suitable algorithm.


Aliasing: yes, this is a downside to using ray tracing; objects can be missed by rays, and supersampling only pushes the problem further out. However, to say that the cost of supersampling is exponential (!!) is of course seriously incorrect - it scales linearly with the number of samples per pixel.


Static vs dynamic scenes: this is where I most disagree with the article, and is also probably the most important point to consider when discussing real time ray tracing of complex scenes. Recently there have been HUGE successes in accelerating the construction of spatial subdivision structures; one of the most important advances here is the Bounding Interval Hierarchy, as presented by Wächter and Keller. You can build these in the blink of an eye, and the performance is incredibly close to that of a well-implemented k-D tree! What's more is that you can a priori determine your memory usage, and since it's so ridiculously fast to build you can do this lazily... if I can stick my neck out a little with a guess, given dedicated hardware you can just feed it a triangle soup as you might with rasterisation, and it'll be completely transparent to the application.
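
To give a feel for why these builds are so cheap, here's a minimal BIH-flavoured sketch in C++: one O(n) partition pass per node, two clip planes stored per node, predictable memory. The midpoint split, the node layout and all names are my simplification of the Wächter/Keller scheme, not their code:

```cpp
// A minimal sketch of a BIH-style build, assuming one precomputed
// axis-aligned box per triangle. Hypothetical names throughout.
#include <vector>
#include <algorithm>

struct AABB { float min[3], max[3]; };

struct BihNode {
    int axis = -1;            // -1 marks a leaf
    float clip[2] = {0, 0};   // right edge of left child, left edge of right child
    int child[2] = {-1, -1};
    int first = 0, count = 0; // leaf: a range into 'prim'
};

struct Bih {
    std::vector<BihNode> nodes;
    std::vector<int> prim;    // primitive indices, partitioned in place

    void init(const std::vector<AABB>& box) {
        prim.resize(box.size());
        for (int i = 0; i < (int)box.size(); ++i) prim[i] = i;
        AABB b = box[0];      // scene bounds (assumes a non-empty scene)
        for (const AABB& x : box)
            for (int a = 0; a < 3; ++a) {
                b.min[a] = std::min(b.min[a], x.min[a]);
                b.max[a] = std::max(b.max[a], x.max[a]);
            }
        build(box, 0, (int)box.size(), b);
    }

    int build(const std::vector<AABB>& box, int first, int count,
              AABB bounds, int maxLeaf = 4) {
        int id = (int)nodes.size();
        nodes.push_back({});
        if (count <= maxLeaf) { nodes[id].first = first; nodes[id].count = count; return id; }

        // split the longest axis at its spatial midpoint (the paper picks
        // candidate planes from a global grid; the midpoint keeps the flavour)
        int ax = 0;
        for (int a = 1; a < 3; ++a)
            if (bounds.max[a] - bounds.min[a] > bounds.max[ax] - bounds.min[ax]) ax = a;
        float split = 0.5f * (bounds.min[ax] + bounds.max[ax]);

        // one O(n) partition pass; the two clip planes fall out for free
        int mid = first;
        float lmax = bounds.min[ax], rmin = bounds.max[ax];
        for (int i = first; i < first + count; ++i) {
            const AABB& b = box[prim[i]];
            if (0.5f * (b.min[ax] + b.max[ax]) < split) {
                lmax = std::max(lmax, b.max[ax]);
                std::swap(prim[i], prim[mid++]);
            } else {
                rmin = std::min(rmin, b.min[ax]);
            }
        }
        if (mid == first || mid == first + count) {   // everything on one side:
            nodes[id].first = first; nodes[id].count = count; return id;
        }   // (the paper re-splits the shrunken interval instead of leafing out)

        nodes[id].axis = ax; nodes[id].clip[0] = lmax; nodes[id].clip[1] = rmin;
        AABB lb = bounds, rb = bounds;
        lb.max[ax] = split; rb.min[ax] = split;
        int l = build(box, first, mid - first, lb, maxLeaf);
        int r = build(box, mid, first + count - mid, rb, maxLeaf);
        nodes[id].child[0] = l; nodes[id].child[1] = r;
        return id;
    }
};
```

Every split strictly partitions the index list, so the node count is bounded by roughly 2n and can be preallocated up front - that's where the a priori memory bound comes from.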


So already we can build these subdivision structures extremely quickly, and only when needed. As if that wasn't enough, there's a really innovative way to cull triangles from a ray packet described by Reshetov (see [LINK http://www.sci.utah.edu/~wald/RT07/vertex_culling.pdf]), which means your trees can be substantially shallower - lower build latency, less memory, smaller caches.
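
For the curious, the conservative test at the heart of packet culling is tiny. This sketch shows only the basic all-vertices-behind-one-plane rejection (Reshetov's paper layers sharper tests on top), with made-up types:

```cpp
// A triangle can be skipped when all three of its vertices lie outside
// the same bounding plane of the ray packet's frustum. Conservative:
// "false" only means the individual rays still have to be tested.
struct Vec3 { float x, y, z; };
struct Plane { Vec3 n; float d; };   // point p is outside if dot(n,p)+d < 0

static float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// 'planes' are the four side planes of the packet's bounding frustum
static bool cullTriangle(const Plane planes[4], const Vec3 tri[3]) {
    for (int p = 0; p < 4; ++p) {
        bool allOutside = true;
        for (int v = 0; v < 3; ++v)
            if (dot(planes[p].n, tri[v]) + planes[p].d >= 0.f) {
                allOutside = false;
                break;
            }
        if (allOutside) return true;  // whole triangle behind one plane
    }
    return false;  // inconclusive: test the rays themselves
}
```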


These recent improvements have made a dramatic difference in the applicability of ray tracing to dynamic scenes, and they are one of the reasons it's getting so much attention recently - it's not just hype, there are big things happening here.


Global illumination: I think this is a bit out of place when discussing realtime ray tracing. Even sophisticated implementations of Metropolis light transport will struggle in certain difficult situations, like viewing caustics via specular reflection. Since MLT is the most robust rendering method we know (photon mapping coming a close second, often needing Metropolis sampling in difficult settings), one can hardly expect lesser monte carlo methods to do the job reliably in any conceivable situation. Efficiently handling *any* mode of light transport path is a massive big fat unsolved problem in rendering, and until that's solved for offline rendering we shouldn't expect to see it in realtime! Maybe in 10-15 years. For now, just doing constrained solutions (like radiosity) in realtime would be awesome, and Metropolis-sampled instant radiosity as described by Segovia (see [LINK http://bat710.univ-lyon1.fr/~bsegovia/papers/mir.html]) looks really promising.
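
Since instant radiosity keeps coming up, here's the basic (non-Metropolis) idea boiled down to a toy program: trace light paths, deposit virtual point lights, shade receivers against them. The one-plane scene, the flux units and the clamping constant are all invented simplifications, not code from Segovia's paper:

```cpp
// A toy instant-radiosity program. The scene is a single diffuse floor
// plane so the example actually runs; everything here is a placeholder.
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <random>
#include <vector>

struct Vec3 {
    float x = 0, y = 0, z = 0;
    Vec3 operator+(Vec3 b) const { return {x + b.x, y + b.y, z + b.z}; }
    Vec3 operator-(Vec3 b) const { return {x - b.x, y - b.y, z - b.z}; }
    Vec3 operator*(float s) const { return {x * s, y * s, z * s}; }
};
static float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

struct VPL { Vec3 pos, normal; float flux; };  // a virtual point light

int main() {
    std::mt19937 rng(7);
    std::uniform_real_distribution<float> uni(-1.f, 1.f);

    // stage 1: shoot paths from a light at (0,2,0) and deposit a VPL where
    // each one hits the floor plane y = 0 (deeper bounces would recurse the
    // same way, scaling the flux by the albedo at every bounce)
    const int nPaths = 256;
    const float albedo = 0.7f;
    std::vector<VPL> vpls;
    for (int i = 0; i < nPaths; ++i) {
        Vec3 org{0.f, 2.f, 0.f};
        Vec3 dir{uni(rng), -1.f, uni(rng)};          // crude downward sampling
        dir = dir * (1.f / std::sqrt(dot(dir, dir)));
        float t = -org.y / dir.y;                    // intersect the plane y = 0
        vpls.push_back({org + dir * t, {0.f, 1.f, 0.f}, albedo / nPaths});
    }

    // stage 2: shade one receiver point on a wall (normal +x) against all
    // VPLs; in a real renderer this runs per pixel, with a shadow ray or a
    // shadow map deciding visibility for each VPL
    Vec3 x{-1.f, 0.5f, 0.f}, nx{1.f, 0.f, 0.f};
    float radiance = 0.f;
    for (const VPL& l : vpls) {
        Vec3 d = l.pos - x;
        float r2 = std::max(dot(d, d), 0.01f);       // clamp tames the 1/r^2 spike
        Vec3 w = d * (1.f / std::sqrt(r2));
        float g = std::max(0.f, dot(nx, w)) * std::max(0.f, -dot(l.normal, w)) / r2;
        radiance += l.flux * g;
    }
    std::printf("indirect estimate at the receiver: %f\n", radiance);
}
```

The Metropolis-sampled variant replaces the naive path generation in stage 1 with a Markov chain that concentrates VPLs where they matter; stage 2 stays essentially the same.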


"These proven results suggest that traditional Whitted ray tracing has relatively low lighting and image quality, and requires largely static scenes compared to what we are used to already."


Excuse me, where was that proof? Also, I've yet to encounter a game where there are complex worlds you can smash up at will... why? Because then you'd have to rebuild your BSP tree and that's slow? Hmm, sounds like a familiar problem eh... [SMILEY Razz]


Hopefully the game-changing nature of those rapidly constructed acceleration structures is now sinking in. BIH isn't the only game in town, either: pure BVHs are making a big comeback too. Ray tracing complex, dynamic scenes is a reality that more people should be aware of, and it's a pity this article missed that opportunity.


"Not exactly the quantum leap some would like you to believe!"


Journalistic point: a quantum leap is exactly the opposite of what people who use it in this context are intending to say!


More to the point: no one is saying "ZOMG ray tracing rules rasterisation sucks let's all switch now!", except maybe that recent Intel blog *grumble*. Even people inside Intel have said that it was uninformed. What this is all about is The Holy Grail, and it will never be achieved with rasterisation. That much should be obvious, right? It's just a matter of time.


"Almost every solution to a problem with ray tracing involves shooting exponentially more rays"


Again with the "exponential"... please, can we be a little less dramatic/biased and more realistic? In renderers which seek to provide highly realistic renders you will never see an explosion of ray count with depth, because it's carefully handled by probabilistically following a single path; in faster realtime renderers you could well see a branching factor of two for rendering glass (Fresnel-weighted reflection and refraction), but they will typically cap the reflection depth.
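
If anyone doubts that, here is a toy program that just counts rays: with Russian roulette the expected number of rays per camera path is the constant 1/(1-p), independent of any depth limit, and even full reflect+refract branching to a small fixed cap is a known constant per pixel, not an explosion. The continuation probability and the depth caps are arbitrary example values:

```cpp
// Counting rays per camera path under the two termination schemes
// mentioned above. No scene needed; this is pure bookkeeping.
#include <cstdio>
#include <random>

int main() {
    std::mt19937 rng(42);
    std::uniform_real_distribution<float> uni(0.f, 1.f);

    // (a) unbiased path tracing style: one continuation per bounce,
    // terminated probabilistically - no branching, ever
    const float p = 0.8f;          // continuation probability
    const int trials = 1000000;
    long long rays = 0;
    for (int i = 0; i < trials; ++i) {
        int bounces = 0;
        while (uni(rng) < p) ++bounces;
        rays += 1 + bounces;       // camera ray + bounces
    }
    std::printf("roulette: %.3f rays/path (expected %.3f)\n",
                (double)rays / trials, 1.0 / (1.0 - p));

    // (b) Whitted-style glass with reflection *and* refraction followed
    // at every hit, capped at depth d: 2^(d+1)-1 rays - geometric in d,
    // but d is tiny and fixed in practice
    for (int d = 1; d <= 4; ++d)
        std::printf("full branching to depth %d: %d rays\n", d, (1 << (d + 1)) - 1);
}
```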


Let's just step back a little and ask what all this is for: precisely computed high-order reflection and refraction! Before complaining about the moderate cost of achieving this - it is minimal, unless someone has an idea where we're all wasting computational resources - maybe we should *ahem* reflect on the fact that it's completely impossible to do with rasterisation and, lacking a point of comparison, focus more on what we're getting for how much extra computation?


The unconsidered bias against ray tracing in the quoted sentence should be clear by now. Yes Sherlock, it doesn't come for free - got any better ways to do it?


"However, the logic that insists ray tracing looks better than rasterisation goes way back to the 1970s, when rasterisation consisted of a few Gouraud or Phong shaded triangles while ray tracing was adding shadows, reflection and refraction. Things have changed a lot since then."


How ironic... ray tracing algorithms have also changed a lot in just the last two years, yet somehow they don't get compared in their latest form against the latest efforts that hardware-accelerated rasterisation can offer?


"GPUs today access textures at high speeds, and shader hardware allows textures to be used as global scene data. This approach to global illumination started with shadow maps, a light to surface occlusion term packed in a real-time calculated texture."


1. Yes, isn't it nice having dedicated hardware to do texturing and shading? I tell you, it's not easy to compete against hardware backed with billions of research dollars over the last decade using a general purpose processor whose architecture dates back to the 386 (and has to do everything else too). A nice apples-to-apples comparison for sure [SMILEY Rolling Eyes]


2. Rendering baked light maps isn't "an approach to global illumination", it's not even doing local illumination!


I stress the word baked: lightmaps were never calculated in realtime - that's shadow maps. Shadow maps are certainly not an approach to global illumination, and you're probably aware that these lightmaps weren't computed with rasterisation, but with ray tracing. All the Quake/Unreal games, and essentially all terrain lighting/shadowing, are lit via ray tracing.
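
In case that point is surprising, the essence of such a bake fits in a page: one shadow ray per (texel, light) pair, accumulate the unoccluded N.L/r^2 term. The single-sphere occluder, the floor quad and all names here are invented for illustration; a real baker walks the level's surfaces and adds bounce light on top:

```cpp
// A toy Quake-style direct-lighting bake for one 8x8 lightmap,
// printed as ASCII so you can see the ray-traced shadow.
#include <cmath>
#include <cstdio>

struct Vec3 {
    float x, y, z;
    Vec3 operator-(Vec3 b) const { return {x - b.x, y - b.y, z - b.z}; }
};
static float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// shadow ray against one sphere: is the segment from org to light blocked?
static bool occluded(Vec3 org, Vec3 light, Vec3 center, float radius) {
    Vec3 d = light - org, oc = org - center;
    float a = dot(d, d), b = 2.f * dot(oc, d), c = dot(oc, oc) - radius * radius;
    float disc = b * b - 4.f * a * c;
    if (disc < 0.f) return false;
    float t = (-b - std::sqrt(disc)) / (2.f * a);
    return t > 1e-4f && t < 1.f;          // hit strictly between texel and light
}

int main() {
    const int W = 8, H = 8;               // a tiny lightmap for a floor quad
    Vec3 light{0.f, 4.f, 0.f};
    Vec3 blocker{0.5f, 1.f, 0.5f};        // sphere hovering over the floor
    for (int j = 0; j < H; ++j) {
        for (int i = 0; i < W; ++i) {
            // texel center on the quad [-1,1]^2 at y = 0
            Vec3 p{(i + 0.5f) / W * 2.f - 1.f, 0.f, (j + 0.5f) / H * 2.f - 1.f};
            float e = 0.f;
            if (!occluded(p, light, blocker, 0.4f)) {
                Vec3 d = light - p;
                float r2 = dot(d, d);
                e = std::fmax(0.f, d.y / std::sqrt(r2)) / r2;  // N.L / r^2, N = +y
            }
            std::putchar(e > 0.f ? '#' : '.');  // crude dump of the baked map
        }
        std::putchar('\n');
    }
}
```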


"This approach allows most global illumination models to be used with a rasterisation-only renderer; it’s simply a matter of figuring out how to capture the data in a texture."


This is, sorry, pure nonsense. On so many levels.


First of all, there is only one global illumination "model". It's not even a model, it's the solution to the rendering equation. This, compounded with the "shadowmaps == global illumination approach" statement earlier, shows that someone doesn't really understand global illumination.


Next up, if global illumination could be efficiently computed using rasterisation, don't you think people would be doing that, instead of spending hours rendering with Maxwell, Indigo, Mental Ray, Fryrender, Vray, etc.? Of course they would, in a heartbeat! To some extent rasterisation can help, but there are always corners being cut (e.g. a popular approach is photon mapping on the CPU - the real computational core - and then doing the final gather on the GPU).


Finally, global illumination is NOT just a matter of figuring out how to capture the data in a texture!! How patently absurd! How should this texture data be computed, drawing a bunch of triangles? Ray tracing in a shader (ray tracing is not especially well-suited to GPUs)? Supposing you do the ray tracing in a shader, how is that rasterisation, and not ray tracing?


In any case, global illumination isn't what you store in a texture and later display; it's the method you use to compute the data in the first place. So if you write a renderer that displays precomputed global illumination (computed via ray tracing, obviously) stored as spherical harmonics or wavelets, that's just displaying the results of a global illumination simulation. If I watch an anime on my computer, it's not actually drawing the anime on its own, it's just decompressing video; same principle.


Anyway, I've written WAY too much already so I just want to conclude with one (rhetorical, as I really must study for my exams) question:


Why all the negative bias against a rendering algorithm that has proven to be so helpful, and that will inevitably be so much more useful in the future?
(L) [2007/10/15] [SonK] [New Articles on RTRT @the Inquirer + Intel] Wayback!

First off i'm not a programmer, i'm a 3D artist. But i'd like to ask this question to all the programmers in this thread. With everything we know about Intel Larrabee (2.4 GHz, 16 cores, 64 threads, 1 TFLOPS), would that be enough power to do "Metropolis instant radiosity" (or anything that looks like GI, for that matter) in real time?


the "Metropolis instant radiosity" images look beautiful(though not perfect), and its running on a Core Duo T2600(2.16 GHz).
(L) [2007/10/15] [davepermen] [New Articles on RTRT @the Inquirer + Intel] Wayback!

hm, I'm one of those who like low resolutions ... so I'll be happy with some GI at 320x240 hehe
(L) [2007/10/18] [bouliiii] [New Articles on RTRT @the Inquirer + Intel] Wayback!

Just about Metropolis Instant Radiosity: it would be much, much faster on a GPU with interleaved sampling patterns ... And you can use a cache for the shadow maps (with much simpler techniques for handling flickering issues than the technique presented by Nvidia this year at the symposium on rendering). So: dirtier, but much faster. Today, I am almost sure that ray tracing won't win for many, many years. In one or two years, many games will have geometry, subdivision surfaces, bezier patches and so on tessellated on the fly, and I cannot see how "coherent ray tracing" (and its worse perversions using frustums --> why not directly use a rasterizer??!?) (which is not really ray tracing) with all its limitations can compete with that. Adding one or two mirrors will not provide enough extra quality, and people will certainly prefer perfectly fine, displaced geometry over that.


If we want to achieve photorealistic rendering (and handle these f**** flickering issues), we are really far from interactive rendering.


As for Larrabee, even with 64 cores, I don't believe it will provide enough power to make ray tracing a competitive solution, but it will certainly be an outstanding co-processor.


PS: and we have to remember that with a geforce7 and a fine tessellation system, we can easily reach peak performance at more than 500 million triangles/sec.... and with the office scene, you can easily get 1000 f/s....


PSPS: and ray tracing does not easily handle huge scenes --> how can you deform a huge model? The classical system:

1/ acquisition

2/ tesselation

3/ decimation --> output coarse mesh, normal map, displacement map

4/ On the fly tesselation and Rasterization


With skinning or deformations on the coarse mesh and so on....


This is efficient, cache-coherent and everything you want. Making it work in an interactive ray tracing system (with a kind of triangle cache to replace the on-the-fly tessellation) is certainly a pain in the ass. Directly using the huge mesh also seems intractable. (A toy sketch of the on-the-fly tessellation step follows below.)
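
To illustrate step 4 of the pipeline above, here's a toy uniform tessellation of one coarse triangle with a procedural stand-in for the displacement-map fetch. All names and constants are invented:

```cpp
// Uniform on-the-fly tessellation of one coarse triangle, each generated
// vertex pushed along the normal by a displacement value. displacement()
// stands in for a displacement-map lookup.
#include <cmath>
#include <cstdio>
#include <vector>

struct V3 { float x, y, z; };
static V3 lerp3(V3 a, V3 b, V3 c, float u, float v) {
    float w = 1.f - u - v;   // barycentric interpolation across the triangle
    return {w*a.x + u*b.x + v*c.x, w*a.y + u*b.y + v*c.y, w*a.z + u*b.z + v*c.z};
}

// stand-in for sampling a displacement map at barycentric (u,v)
static float displacement(float u, float v) {
    return 0.1f * std::sin(10.f * u) * std::sin(10.f * v);
}

int main() {
    // one coarse triangle with a constant normal, tessellated lvl x lvl
    V3 p0{0,0,0}, p1{1,0,0}, p2{0,0,1}, n{0,1,0};
    const int lvl = 8;
    std::vector<V3> verts;
    for (int j = 0; j <= lvl; ++j)
        for (int i = 0; i <= lvl - j; ++i) {
            float u = (float)i / lvl, v = (float)j / lvl;
            V3 p = lerp3(p0, p1, p2, u, v);
            float h = displacement(u, v);        // fetch the fine detail
            verts.push_back({p.x + n.x*h, p.y + n.y*h, p.z + n.z*h});
        }
    std::printf("emitted %zu displaced vertices from 1 coarse triangle\n",
                verts.size());
}
```

The generated vertices never leave the chip in the rasterization pipeline; that streaming, cache-friendly property is exactly what an interactive ray tracer would have to replicate with a triangle cache.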


So, if we want ray tracing to beat rasterization, we have to propose something other than mirrors, hard shadows, and 10^20 * log(n) ray traversals (and O(n) builds....). I don't know if the Intel guys are working on an *off-line* ray tracing system to generate perfect pictures quickly, but that may be a better choice than trying to beat rasterization in the field of real-time rendering.
(L) [2007/10/20] [lycium] [New Articles on RTRT @the Inquirer + Intel] Wayback!

the followup article is very interesting! in particular i was excited to read,
(L) [2007/10/20] [toxie] [New Articles on RTRT @the Inquirer + Intel] Wayback!

uhmmmm.. this article also isn't really the most intelligent thing.. e.g.:
(L) [2007/10/28] [toxie] [New Articles on RTRT @the Inquirer + Intel] Wayback!

the thing is that RTRT cannot offer the same quality as GPUs at the moment (as GPUs have a head start of over a decade as a hardware solution).

and true RTRT hardware is not yet possible as nobody knows how to -really- deal with dynamic geometry.

so no matter how much money and time one invests into RTRT, it will be VERY difficult to beat GFX cards for a while. i would even say it will be difficult to EVER beat GFX boards for dynamic scenes, the only exception being global illumination simulations (as only shooting a shitload of rays per frame can amortize the construction time of the acceleration structure for scenes as huge as those currently used in games). and saying that games mostly consist of static geometry is not true. take a look at crysis (okay, this is maybe an extreme example). and in general ppl want to have more freedom in games, meaning more potentially dynamic geometry. and even if it only happens every 5 minutes that a whole building crumbles or a dozen cars explode, how can anyone cope with these situations without dropping the framerate by a factor of X (which is clearly not acceptable for games)?
_________________
The standards are being lowered, not just on the internet, but in all of news and media.
(L) [2007/10/28] [bouliiii] [New Articles on RTRT @the Inquirer + Intel] Wayback!

I just totally agree with Toxie. We may say that rasterization is a hack, but only when we deal with light transport. Otherwise, rasterization is certainly the most elegant way to process geometry. For this reason, I recently decided to switch to video game programming in the industry (with GPUs, of course) on the one hand, and to prefer off-line, physically based rendering over real-time ray tracing in my spare time (inside a serious and well-supported open source project) on the other. The future will tell if it was a good choice.


EDIT: and it is always fun to work with hard constraints:

video games --> make the best technical choices to get the best rendering quality at 60 f/s or something like that

off-line rendering --> make the best technical choices to get the best pictures in 20 minutes or something like that
(L) [2007/10/28] [toxie] [New Articles on RTRT @the Inquirer + Intel] Wayback!

so you're into making video games now? anything you can tell us about it? :)
_________________
The standards are being lowered, not just on the internet, but in all of news and media.
(L) [2007/10/29] [toxie] [New Articles on RTRT @the Inquirer + Intel] Wayback!

congratulations! so where can we download the .pdf? ;)


and good luck with the PS3 (which seems to be a bitch to develop for)! drop us a message when you have some kind of game demo to release!
_________________
The standards are being lowered, not just on the internet, but in all of news and media.
