Method of partitioned depth buffers

(L) [2007/05/21] [lycium] [Method of partitioned depth buffers] Wayback!

[WARNING: long preamble!]


Some of you might know that I've been on a little side quest (from ray tracing) for the last year or so: the art and science of rendering high quality 3D fractals. Unfortunately, for reasons which will soon be explained, the best fractals do not lend themselves to ray tracing, so I've been forced to use depth buffering. The reason I'm posting here is similar to the reason I posted my last little report here: there is significant overlap in the methods required, and us folk who like high quality rendering might appreciate the results ;)


The principal difficulty in rendering these fractals efficiently (ignoring the accuracy problems which naturally arise from trying to compute normals on a fractal) is that the geometry "doesn't exist"; by that I mean that it is not random access, and is produced by a sequence of complex procedures depending on many decision variables. Since these fractal iterations are usually extremely computationally expensive, cumulatively more so the deeper one iterates, the goal becomes using every sample generated to the fullest extent possible. Previously I'd just allocated all my depth buffers at once and updated each one as the geometry was generated; not only does this consume copious amounts of memory, but the access patterns are also extremely incoherent and each sample touches many cache lines.
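

To make the above concrete, here's a rough sketch of what I mean by the naive scheme. The fractal iteration and the per-view projection are only declared as placeholders (iterateFractal and projectToView are made-up names, not my actual code); the point is the shape of the splatting loop:

[CODE]
#include <cstdlib>
#include <limits>
#include <vector>

struct Vec3 { float x, y, z; };

// one depth buffer per view; note that the depth test itself is just a '<'
struct DepthBuffer {
    int w, h;
    std::vector<float> depth;
    DepthBuffer(int w_, int h_)
        : w(w_), h(h_), depth(size_t(w_) * size_t(h_), std::numeric_limits<float>::max()) {}
    void splat(int px, int py, float z) {
        if (px < 0 || py < 0 || px >= w || py >= h) return;
        float &d = depth[size_t(py) * size_t(w) + size_t(px)];
        if (z < d) d = z;
    }
};

// placeholders: the expensive fractal iteration and the per-view projection
Vec3 iterateFractal(const std::vector<float> &decisionVars);
bool projectToView(int view, const Vec3 &p, int &px, int &py, float &z);

// naive scheme: all buffers resident at once, every sample touches every view,
// so the working set is enormous and the writes are completely incoherent
void renderNaive(std::vector<DepthBuffer> &buffers, long numSamples)
{
    std::vector<float> u(16); // decision variables for one sample
    for (long s = 0; s < numSamples; ++s) {
        for (size_t i = 0; i < u.size(); ++i)
            u[i] = rand() / (float(RAND_MAX) + 1.0f);
        const Vec3 p = iterateFractal(u); // the expensive part
        for (size_t v = 0; v < buffers.size(); ++v) {
            int px, py; float z;
            if (projectToView(int(v), p, px, py, z))
                buffers[v].splat(px, py, z);
        }
    }
}
[/CODE]

Every generated point gets tested against every view's buffer, so the working set is the sum of all the buffers and the writes land all over memory.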


That brings us to the stage I found myself in October 2006, the results of which can be seen here: [LINK http://ompf.org/forum/viewtopic.php?t=134]


Since then I've been wondering what to do about the horrendous memory consumption, and how to further improve the rendering quality and feature set. After posting that thread I had the pleasure of reading the Lightcuts paper by Walter et al.[1], and it's really one of the most innovative rendering methods I've ever encountered. They make the sensor<->source pairing explicit while rendering, and carefully analyse the error incurred by approximating it. At the time of reading, I must admit the paper didn't make much sense to me. Recently, however, I was thinking again about the memory usage problem and asked whether it was possible not to have the entire set of depth buffers in memory at all times. The question then becomes, "how do I know which parts of which buffers are relevant to rendering at any given moment?" HANG ON A MINUTE, didn't I read a paper about just that the other day?! Indeed I had :D


The trouble is that the paper deals with ray tracing, which I love with all my heart and soul but which unfortunately isn't the right method for rendering these fractals. So I gave the paper another read, this time with my specific problem in mind, and before long an exceptionally beautiful and natural extension came to mind! It has to be one of the happiest moments of my life :) *ahem* Sorry, but that doesn't happen to me often, and I was just elated!


In any case, it turns out that one can sort of "integrate" [insert neat heuristics here] this importance function over an area if certain things are treated as constant (visibility, for example), and with a few terms taken into account this becomes surprisingly effective! :D The whole thing was really difficult for me to implement, given the time constraints imposed by my shockingly poor university schooling, but eventually the method showed some signs of actually working! Moreover, it was much, much faster than the original algorithm, was trivially parallelised, and allowed for a near-infinite number of lights.


The biggest hindrance to the efficiency of this new, modified depth buffering method is sampling efficiency: as the buffer gets smaller and smaller, the chances of random decision variables producing geometry visible to the depth buffers decrease very quickly. However, being an old hand at the sampling game, I knew well what the right tool for the job was: Metropolis-Hastings sampling :) I have some neat equal-time comparison images to show the incredible difference this makes. Furthermore, in implementing the method along the lines of Kelemen et al.'s fantastic paper[2] (doing the sampling in the primary sample space, the unit cube of all random variables) I noticed that I could very brutally approximate my contribution/scoring function (I've actually tried many of them, and am unsure which is best overall), because the exact stationary distribution doesn't matter here! Why? Because a depth comparison is just a < operation, whereas with ray tracing you need to accumulate with exactly the right density to produce the correct image.
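

To show roughly what I mean, here's a minimal sketch of the Kelemen-style mutation loop in the primary sample space. The scene-specific parts (iterateFractal, projectToPartition, splatToPartition) are again only declared as placeholders, and the crude score, mutation radius and large-step probability are illustrative numbers rather than what I actually use:

[CODE]
#include <cmath>
#include <cstdlib>
#include <vector>

struct Vec3 { float x, y, z; };

static float uniform01() { return rand() / (float(RAND_MAX) + 1.0f); }

// placeholders: the expensive fractal iteration, the projection/test against
// the currently active partition, and the '<' depth splat into that partition
Vec3 iterateFractal(const std::vector<float> &u);
bool projectToPartition(const Vec3 &p, int &px, int &py, float &z);
void splatToPartition(int px, int py, float z);

// crude score: did the sample land in the active partition at all? the exact
// stationary distribution doesn't matter for a '<' depth test, so a very rough
// importance like this is enough to steer the chain (the small floor keeps it
// from getting stuck and avoids division by zero)
float score(const std::vector<float> &u, int &px, int &py, float &z)
{
    const Vec3 p = iterateFractal(u);
    return projectToPartition(p, px, py, z) ? 1.0f : 1e-4f;
}

void metropolisPartition(long numMutations, float pLarge, float radius)
{
    std::vector<float> cur(16), prop(16);
    for (size_t i = 0; i < cur.size(); ++i) cur[i] = uniform01();
    int px, py; float z;
    float curScore = score(cur, px, py, z);

    for (long m = 0; m < numMutations; ++m) {
        if (uniform01() < pLarge) {
            // large step: a completely fresh point in the unit cube
            for (size_t i = 0; i < prop.size(); ++i) prop[i] = uniform01();
        } else {
            // small step: perturb each decision variable and wrap back into [0,1)
            for (size_t i = 0; i < cur.size(); ++i) {
                float v = cur[i] + radius * (2.0f * uniform01() - 1.0f);
                prop[i] = v - std::floor(v);
            }
        }
        int qx, qy; float qz;
        const float propScore = score(prop, qx, qy, qz);
        // symmetric proposal, so accept with probability min(1, new/old)
        if (uniform01() * curScore < propScore) {
            cur.swap(prop);
            curScore = propScore; px = qx; py = qy; z = qz;
        }
        if (curScore > 0.5f) // current state lies inside the active partition
            splatToPartition(px, py, z);
    }
}
[/CODE]

Since the proposals are symmetric (a fresh point in the unit cube, or a small wrapped perturbation), the acceptance test is just the ratio of the crude scores. And because the depth test is a <, a rejected proposal that happened to land in the partition is still a perfectly valid sample, so nothing stops you from splatting it as well.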


So now we're up to two cool contributions, and it's starting to look like a paper should really be written! However, all is not so rosy...


A difficult and scene-dependent parameter to set is the size of the depth buffer partitions; ideally one would like the entire working set to remain in L2 cache, but if one makes the partitions too small then the M-H sampling algorithm increasingly fails to find useful decision variables for the fractal process. There is therefore a tradeoff between raw speed and sampling efficiency, and what's more, L2 cache is king for this algorithm. I've made some preliminary graphs of partition size versus rendering time, and the relationship is rather complex; I'll have to wait until I have my Athlon X2 again (when I finally get to New Zealand) to compare its 2 MB of L2 against the C2D's 4 MB. After that I can begin to systematically explore the problem of choosing the partition size, which would otherwise be a rather glaring omission in the paper-to-be.
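

For what it's worth, here's the kind of back-of-the-envelope arithmetic I start from before measuring anything; the 4 bytes per pixel and the "use half the cache" headroom are assumptions, not measurements from my renderer:

[CODE]
#include <cmath>
#include <cstdio>

// largest square tile whose depth data fits in a given byte budget;
// 4 bytes/pixel (one float of depth) is an assumption, not a measurement
int tileSideForCache(unsigned long cacheBytes, unsigned long bytesPerPixel)
{
    const unsigned long pixels = cacheBytes / bytesPerPixel;
    return int(std::sqrt(double(pixels)));
}

int main()
{
    // leave half the cache as headroom for sampler state, code, etc.
    std::printf("2 MB L2, half budget: %d^2 tile\n", tileSideForCache(1ul << 20, 4));
    std::printf("4 MB L2, half budget: %d^2 tile\n", tileSideForCache(2ul << 20, 4));
    return 0;
}
[/CODE]

That gives roughly a 512x512 tile for half of 2 MB and about 724x724 for half of 4 MB; where the actual sweet spot lies against the M-H sampler's efficiency is exactly what still has to be measured.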


The other difficulty is how to handle global illumination. I have absolutely no clue how to do this :( I've implemented (a variant of) the extended Lightcuts algorithm, Multidimensional Lightcuts, but the generalisation potential seems to come at an increasingly high cost: just implementing the extra dimensions for the lens positions has taken me forever, and that's far from the most difficult of the other dimensions. To add motion blur, I need to find an approximation for the overall sensor<->source pair contributions over a 7D volume (2x lens, 2x pixel, 2x light, 1x time), without any kind of numerical integration. Using numerical integration would probably be really expensive and would begin to erode the speed gains from the method... so more work has to be done here too.


[End long preamble]



I didn't post this in the Visuals section without good reason :) My first decent test image, a fractal of moderate complexity designed by my friend Aexion (see [LINK http://aexion.deviantart.com/] for more of his incredible works) illuminated with a HDR light source (Debevec's beach probe, approximated by 1500 or so point lights by the median cut algorithm) and rendered with depth of field, can be found here: [LINK http://www.deviantart.com/deviation/55515150/]


To have some pixels at all in this post, here's a small preview (the link above is an uncompressed 1920x1200 version):


[IMG #1 ]


I'll post here again when I've produced a short test animation, a sort of demo for which I'll probably steal some demoscene music :/ That will have motion blur as well, in spite of the difficulties explained above (I will use numerical integration to measure its impact on rendering performance).



References:


[1] B. Walter, S. Fernandez, A. Arbree, K. Bala, M. Donikian, D. P. Greenberg, "Lightcuts: A Scalable Approach to Illumination", SIGGRAPH 2005. [LINK http://www.graphics.cornell.edu/~bjw/lightcuts.pdf]

[2] C. Kelemen, L. Szirmay-Kalos, G. Antal, F. Csonka, "A Simple and Robust Mutation Strategy for the Metropolis Light Transport Algorithm", Eurographics 2002. [LINK http://www.iit.bme.hu/~szirmay/paper50_electronic.pdf]
[IMG #1]:Not scraped: https://web.archive.org/web/20071029234329im_/http://www.fractographer.com/wip/23_preview.jpg
(L) [2007/05/21] [beason] [Method of partitioned depth buffers] Wayback!

Well, let me be the first to say... that is awesome. What sort of reflection model is it? Looks like diamond or something. What was your rendertime?
(L) [2007/05/21] [lycium] [Method of partitioned depth buffers] Wayback!

reflection model is seriously ad hoc, but was chosen because it looks purdy and was very easy to fit into the lightcuts-based framework: just a cone around the normal :| fortunately i have a little bit of relief in this respect because i'm rendering fractals: just how did you expect it to look? ;)


render time is roughly "overnight" (although that's starting at 8am, my bedtime that day) on a 2ghz core 2 duo laptop. let's say 8 hours, but bear in mind that i tuned the rendering quality to account for my sleeping time :P
(L) [2007/05/22] [slartybartfast] [Method of partitioned depth buffers] Wayback!

Lycium : I just checked out your gallery - I was blown away ....  [SMILEY Shocked]


A long time ago (1986), I was pretty much the first person to play with computer generated fractals at my college - even though I was an undergraduate at the time, I gave a talk to the staff to explain what fractals were !! I generated fractal mountainscapes, planets, Mandelbrot and Julia sets. The hardware I used was basic by today's standards.


Despite the fact that technology has come a long way in the last 20 years, I can safely say those are some of the most amazing fractal pictures I've ever seen. Most images I've seen actually look like mathematical functions - cold, precise, perfectly rendered. Your images are different - they look completely organic, painterly and with a great artistic flair. I wouldn't mind having one or two of them hanging on my wall.


Well done. You should be proud.


Slarty.
_________________
S. Hurry, or you'll be late.

A. Late ? Late for what ?

S. Late. As in "The Late Dent-Arthur-Dent"
(L) [2007/05/22] [lycium] [Method of partitioned depth buffers] Wayback!

that's much appreciated mate :) i have to say, there are other people on deviantart who are far more capable in the 2d arena: they use a (delphi) program called apophysis, which is open source and has a really powerful gui that allows for very refined and very fast experimentation. that of course makes a world of difference, as you can see in the following galleries (i jokingly refer to them as the "apo high council"):


[LINK http://zueuk.deviantart.com/]

[LINK http://psion005.deviantart.com/]

[LINK http://halcyon83.deviantart.com/]

[LINK http://mobilelectro.deviantart.com/]

[LINK http://joelfaber.deviantart.com/]

[LINK http://michaelfaber.deviantart.com/]


unfortunately, my gui-less 2d programs (used to make all the 2d images in my gallery) are very limited in that they are controlled entirely via code, so i haven't managed to wring the best out of them yet. i've actually implemented a bunch of generalisations of the apophysis rendering method! you can read about that here: [LINK http://flam3.com/flame_draves.pdf]


however... 3d fractals are quite a different story, and i don't see any of those apophysis dudes following the crazy path i've taken ;) the question there is, how to make a useful 3dsmax-like interface for designing fractals? anyway, first i need free time :(


two of my friends on these forums, darnal and greenhybrid, have also implemented flame-fractal renderers (besides 2d ray tracers ;). most of darnal's flame renders aren't on deviantart (he's chosen instead to show off his new path tracer, heh), but gh's are:


[LINK http://greenhybrid.deviantart.com/]

[LINK http://darnal.deviantart.com/] (i link it anyway for the beautiful caustics!)



ps. i do have some of those images at poster res (typically around 13k by 8k), specifically dreamscape, genesis, oneiric (increeeedible details), the sierpinskahedron, temptress and probably a few others if i look around ;)
(L) [2007/05/22] [greenhybrid] [Method of partitioned depth buffers] Wayback!

heh, like already said on da, this is so f*ing awesome, and I f*ing fully agree with slartybartfast, your work so far is abs() breathtaking [SMILEY Very Happy]


...what will people think of me if they visit my poor gallery and compare it to yours  [SMILEY Embarassed] [thx for linking over nevertheless [SMILEY Wink]]


Sadly I'm out of the discussion since I haven't read the lightcuts paper yet, but leave me a PM/GMail/etc. if you have the paper [SMILEY Sad]



edit: typo...
_________________
[LINK http://greenhybrid.net/ greenhybrid.net]

Real Men code Software Graphics.
(L) [2007/05/22] [lycium] [Method of partitioned depth buffers] Wayback!

there's a direct link in the post, specifically the references ;) alternatively, you could google for "lightcuts" ;) my monthly bandwidth is severely constrained until i get to nz, otherwise i would send it (am always happy to help where i can in coding/graphics matters).


ps. you compare yourself to me way too much! there is nothing lacking about your gallery, and on things where i have a head start you can't expect the same results in such a short time just from chatting with me online (your higher iq notwithstanding ;)... i have dedicated several years to these things, and am therefore expected to be good! what a waste it would be otherwise, no?
(L) [2007/05/22] [greenhybrid] [Method of partitioned depth buffers] Wayback!

hehe, I guess you have the bigger one, but to cite 'someone', "it's just a number" [SMILEY Wink]


[prolly] Actually I was one of the first here (looking at the number of watches then) who read the whole preamble! [/prolly]  What I meant wasn't the lightcuts-paper (I have it here [and at work, but pshh, don't tell boss]), I meant yours! Keep on, ranger!


And about the gallery, I was just being a bit self-ironic (is that the proper english term?); I know it's not that bad (it simply can't be bad after having you as my fractalic mentor [SMILEY Smile] ) [SMILEY Wink]. Even the toxic avenger here liked/likes it, he said so once [SMILEY Dancing]


What I wanted to say, too: If that's the lycium with no time, who's the lycium *with* time?
_________________
[LINK http://greenhybrid.net/ greenhybrid.net]

Real Men code Software Graphics.
(L) [2007/05/22] [beason] [Method of partitioned depth buffers] Wayback!

this is definitely siggraph art show worthy (or worthier). i don't know how that works, but i guess it would count as a publication? a paper would probably be better. i don't know if you can do both.


edit: disclaimer: alas, i am no authority on what is siggraph worthy [SMILEY Smile]
(L) [2007/05/23] [toxie] [Method of partitioned depth buffers] Wayback!

to be sarcastic: everything that just looks good is siggraph worthy. ;)


(but this here seems to feature some cool tricks and math, too, so go for it!!)
_________________
The box. You opened it. We came.
(L) [2007/05/23] [lycium] [Method of partitioned depth buffers] Wayback!

actually i've never thought about that; i might have difficulty getting into the us though, since i only have a european passport (does eurographics have a similar thing?).


if it's genuinely possible* i'd love to do it, however i have no idea where to start [SMILEY Confused] any advice at all would be very much appreciated!



* ppl familiar with the standard of application must pls tell me straight if it's not hq enough! and of course i'd only go there with more than just 1 test render: at least a handful of images, an animation, and a fast machine to show it rendering where ppl can direct the fractal [SMILEY Very Happy]
(L) [2007/05/23] [fpsunflower] [Method of partitioned depth buffers] Wayback!

Very nice work lycium, you should definitely consider showing off some of this stuff at Siggraph. Although I think the deadlines have mostly all passed for this year.


You should be able to travel to the US for the duration of the conference without any trouble ... it's trying to work here that can be hard. But other foreigners who have travelled to the US might be able to tell you more about that.


I would recommend putting together a "sketch", as Siggraph calls them, describing your renderer and any new algorithms you have. You should also submit any original artworks you made with it to the art gallery ... it's generally really friendly to this kind of mathematically inclined stuff. I know, for instance, that the "electric sheep" guy presented his work in a sketch a couple of years back. If I were you I would browse the archives and get an idea of what kind of stuff people present. There are usually one or two sessions each year that cover the kind of work you are doing (math/art programs).


I'm guessing Eurographics has a similar event, like "short papers" or "posters" for this kind of work, but I've never been.


Oh, and if you have any animations rendered with these methods, you should submit them to the animation theatre part of Siggraph too. You can either have a video with a voice-over explaining fractals / your rendering algorithm, or just a totally abstract piece with morphing fractals. I've seen both - and they don't have to be that long (check the rules).
(L) [2007/06/05] [Wussie] [Method of partitioned depth buffers] Wayback!

Woah, this looks pretty good! [SMILEY Very Happy] These kinds of renders are one of the main reasons I've kept a frequent eye on the forums here, but right now I just realized that I hadn't even registered for an account yet, and what better opportunity than now. [SMILEY Very Happy]

I'd love to read up some and post some educated words, but I have to be off to attend Phantom's class in two minutes. Keep it up and good luck! Keep us posted on the Motion Blur issues, I'm very curious how that'll turn out.
(L) [2007/06/05] [lycium] [Method of partitioned depth buffers] Wayback!

i think phantom might have the coolest job out of anyone on these forums :P depends what you're after though; for me it'd be working for wächter/keller, if i can handle the pace! but i love teaching, perhaps too much, as some ppl on these forums know :P


thx for the kind words wussie ;) right now what i'm battling with is the issue of "time caustics": small objects moving really fast along complicated paths. conservative (maximal) heuristics for bounding the motion of the object along these paths by sampling are both expensive and ... very conservative, with lots of false positives :| those fractals are really expensive to compute, so following the fractal iteration for a bunch of decision variables that don't lead to a visible result undermines the whole scheme!
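

to give an idea of the kind of heuristic i mean, here's a sketch of a sampled motion bound: evaluate the path at a handful of times, take the union of the positions, and pad by the largest step seen between samples. positionAt and the padding factor are placeholders; the problem is that every path evaluation is itself a full fractal iteration, and the padded box is still hugely pessimistic for small fast objects:

[CODE]
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

struct AABB {
    Vec3 lo, hi;
    void expand(const Vec3 &p) {
        lo.x = std::min(lo.x, p.x); lo.y = std::min(lo.y, p.y); lo.z = std::min(lo.z, p.z);
        hi.x = std::max(hi.x, p.x); hi.y = std::max(hi.y, p.y); hi.z = std::max(hi.z, p.z);
    }
    void pad(float r) {
        lo.x -= r; lo.y -= r; lo.z -= r;
        hi.x += r; hi.y += r; hi.z += r;
    }
};

// placeholder: position of the (small, fast) object at time t in [0,1];
// for a fractal this is itself an expensive evaluation
Vec3 positionAt(float t);

// sampled, conservative motion bound: union of n+1 path samples, padded by the
// largest step seen between consecutive samples times a guessed safety factor
AABB boundMotion(int n, float padFactor)
{
    Vec3 prev = positionAt(0.0f);
    AABB box; box.lo = prev; box.hi = prev;
    float maxStep = 0.0f;
    for (int i = 1; i <= n; ++i) {
        const Vec3 p = positionAt(float(i) / float(n));
        box.expand(p);
        const float dx = p.x - prev.x, dy = p.y - prev.y, dz = p.z - prev.z;
        maxStep = std::max(maxStep, std::sqrt(dx * dx + dy * dy + dz * dz));
        prev = p;
    }
    // assumes the path doesn't wiggle more than padFactor * maxStep between
    // samples, which is exactly the kind of guess that is either unsafe or huge
    box.pad(padFactor * maxStep);
    return box;
}
[/CODE]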
(L) [2007/06/07] [Ono-Sendai] [Method of partitioned depth buffers] Wayback!

Nice image.
_________________
[LINK http://indigorenderer.com/]
(L) [2007/06/07] [lycium] [Method of partitioned depth buffers] Wayback!

much appreciated master ono, i know your eyes have seen quite a few! :)
