Disco Stu lying on the floor and looking at the disco lights
(L) [2006/05/23] [greenhybrid] [Disco Stu lying on the floor and looking at the disco lights] Wayback!It's been a while since the last visual here, so I thought I'd present the first images of the glass++ raytracer.
It's the second rewrite of my glass tracer and I called it '++' because it's object-oriented to the bones [SMILEY Twisted Evil]
I may write another "glass without ++" some time in glorious future, which would be rtrt.
a short version history:
glass Version 1 meant first time kd-tree (code went pasta)
glass Version 2 meant first time monte-carlo (code went pasta, again)
glass Version ++ now means photon-mapping and distribution (trying not to let it get pasta ;))
it took nearly 2 hours to render those pics (I know it's not that fast; I only implemented a kd-tree for the photon map, not for the scene, and kd-depth is capped at max 20 recursions or fewer than 20 photon-things per node. I tried 30/10, which worked fine with fewer photons, but for these pics windows said "dude, your harddisk ... it's full" (meaning my tracer bounced 512 Megs RAM + 800 Megs harddisk))
[IMG #1 ]
color bias = +/- 0
[IMG #2 ]
color bias = -1
[IMG #3 ]
color bias = -2
[IMG #4 ]
color bias = -50 (!!)
EDIT: btw the photon-map kd is naive; I'm a bit afraid of SAH-ing it because of the scene-compile-time
_________________
[LINK http://greenhybrid.net/ greenhybrid.net]
Real Men code Software Graphics.
[IMG #1]:Not scraped:
https://web.archive.org/web/20071020183956im_/http://www.root-engine.net/greenhybrid/images/glasspp_log_0001.jpg
[IMG #2]:Not scraped:
https://web.archive.org/web/20071020183956im_/http://www.root-engine.net/greenhybrid/images/glasspp_log_0002.jpg
[IMG #3]:Not scraped:
https://web.archive.org/web/20071020183956im_/http://www.root-engine.net/greenhybrid/images/glasspp_log_0003.jpg
[IMG #4]:Not scraped:
https://web.archive.org/web/20071020183956im_/http://www.root-engine.net/greenhybrid/images/glasspp_log_0004.jpg
(L) [2006/05/23] [lycium] [Disco Stu lying on the floor and looking at the disco lights] Wayback!it seems you're not conserving energy on those reflections ;) each time light bounces it gets brighter, whereas in real life the total reflectance is always < 1. oh and instead of biasing your image to try and deal with the problem, you might employ something like this (a good stopgap before learning about tone mapping in general): [LINK http://freespace.virgin.net/hugo.elias/graphics/x_posure.htm]
but now, the real questions: that memory usage... 512mb with paging?! 2 hours rendertime, without antialiasing? surely there must be a few bugs around...
PS. for a scene like that, something like an 8x8x1 grid rather than a k-d tree might be appropriate; there isn't enough geometry to justify the overhead.
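a uniform grid has almost no build step either; mapping a point to its cell is just scale-and-clamp (sketch assuming axis-aligned scene bounds, all names made up):

```cpp
#include <algorithm>
#include <array>

// map a world-space point into a cell of an nx*ny*nz uniform grid over the
// axis-aligned box [lo, hi]. points on the upper boundary clamp into the
// last cell so every point inside the box yields a valid index.
std::array<int,3> grid_cell(const std::array<double,3>& p,
                            const std::array<double,3>& lo,
                            const std::array<double,3>& hi,
                            const std::array<int,3>& res) {
    std::array<int,3> c;
    for (int a = 0; a < 3; ++a) {
        double t = (p[a] - lo[a]) / (hi[a] - lo[a]);   // normalize to [0,1]
        c[a] = std::min(int(t * res[a]), res[a] - 1);  // clamp the upper edge
        c[a] = std::max(c[a], 0);
    }
    return c;
}
```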
(L) [2006/05/23] [greenhybrid] [Disco Stu lying on the floor and looking at the disco lights] Wayback!I'm not applying any energy to the photons (this is one of the next targets), right [SMILEY Wink]
But I don't see them getting brighter!? Hmm, thinking about it, I'm wondering why the bias -50 image shows white reflections in the light sources; this might be the point... but on the other hand, in these pics the photon-map generation stops at the primary intersections of the photons, so no bouncing occurs... but yepp, it's buggy at the moment^^
Interesting link, lyc, I'm just reading it!
EDIT: btw the huge amount of memory comes from the >8-byte kd-nodes.
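fwiw, the usual trick to get interior kd-nodes down to exactly 8 bytes is to pack the split axis into the low bits of the child-offset word (a sketch along the lines of the layout popularized by pbrt; not glass++ code, names invented):

```cpp
#include <cstdint>

// 8-byte kd-tree interior node: 4 bytes of split position plus a 32-bit
// word whose low 2 bits hold the split axis (0/1/2; 3 could mark a leaf)
// and whose remaining 30 bits hold the index of the "above" child.
struct KdNode {
    float split;      // split position along the stored axis
    uint32_t flags;   // bits 0..1: axis, bits 2..31: above-child index
    void init_interior(int axis, uint32_t above_child, float s) {
        split = s;
        flags = uint32_t(axis) | (above_child << 2);
    }
    int axis() const { return int(flags & 3u); }
    uint32_t above_child() const { return flags >> 2; }
};
static_assert(sizeof(KdNode) == 8, "interior node packs into 8 bytes");
```

the "below" child can live right after its parent in a flat array, so only one child index needs storing.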
and, lyc, I explored your link and read the content; is it physically correct?
_________________
[LINK http://greenhybrid.net/ greenhybrid.net]
Real Men code Software Graphics.
(L) [2006/05/23] [lycium] [Disco Stu lying on the floor and looking at the disco lights] Wayback!that site is full of gems, and the author is a really nice guy too :)
about your photon mapping stuff: the first and foremost problem i think you have is that you're storing photon hits on the surfaces of the reflective spheres. mirror-like surfaces only reflect light in a single direction, and in real life where all your computations have infinite precision, there's zero chance of an incoming photon supplying the relevant information about light coming from the reflected direction. this is why jensen prescribes only storing photons on non-perfectly-specular surfaces: it's super cheap to sample the incoming light in that case, just trace 1 extra ray. photon mapping really helps in the case of diffuse and glossy reflections, where you'd need to otherwise sample lots of rays: the photons hanging around the point of interest give you an approximation of that incoming light without having to trace the rays! so in the simplest case you just do a radiance estimate (at the non-specular surfaces, otherwise bounce until you hit one) like he describes, and you get your approximation.
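the radiance estimate itself is tiny once the nearby photons are gathered; a minimal single-channel sketch (no kd-tree lookup here, and Photon / estimate_irradiance are made-up names, not jensen's or glass++'s):

```cpp
#include <vector>
#include <array>

// jensen-style density estimate: the photons found within radius r of the
// query point approximate the incoming flux there, and dividing by the
// disc area pi*r^2 turns that flux into irradiance.
struct Photon {
    std::array<double,3> pos;   // hit position (unused here, kept for shape)
    double power;               // flux carried by this photon, single channel
};

double estimate_irradiance(const std::vector<Photon>& nearby, double r) {
    const double kPi = 3.14159265358979323846;
    double flux = 0.0;
    for (const Photon& p : nearby) flux += p.power;
    return flux / (kPi * r * r);
}
```

and per the post above, this only ever gets called at non-specular hit points; specular hits just bounce until they reach one.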
problem is, this looks shit :P i really dislike the overly-smooth look of photon mapping to begin with, but this just adds splotchiness (as if that's a word) into the mix. the common solution to this is to do a final gathering pass: you trace a *whole lot* of rays over the hemisphere and sample the photon map where they hit. this smooths out the result and hides a lot of the ugliness, and the mass tracing of rays is perhaps a rare instance where one could possibly extract some ray coherence for some // tracing action (many many rays, possibly similar directions, no bouncing in the absence of specular surfaces, etc).
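the gathering loop can be sketched like this (single channel, local shading frame with the surface normal along +z; 'lookup' stands in for "trace the gather ray, do the photon-map estimate at its hit"; all names are invented):

```cpp
#include <array>
#include <cmath>
#include <functional>
#include <random>

// cosine-weighted direction on the hemisphere around +z (malley's method:
// sample a disc, project up). returns a unit vector.
std::array<double,3> cosine_sample_hemisphere(double u1, double u2) {
    const double kPi = 3.14159265358979323846;
    double r = std::sqrt(u1), phi = 2.0 * kPi * u2;
    return { r * std::cos(phi), r * std::sin(phi), std::sqrt(1.0 - u1) };
}

// final gather: average many photon-map lookups over the hemisphere.
// with a cosine-weighted pdf the cos term cancels, so a plain average
// of the lookups estimates the gathered irradiance (up to albedo/pi).
double final_gather(const std::function<double(const std::array<double,3>&)>& lookup,
                    int n_rays, unsigned seed = 42) {
    std::mt19937 rng(seed);
    std::uniform_real_distribution<double> uni(0.0, 1.0);
    double sum = 0.0;
    for (int i = 0; i < n_rays; ++i)
        sum += lookup(cosine_sample_hemisphere(uni(rng), uni(rng)));
    return sum / n_rays;
}
```

since the gather rays don't bounce (absent specular surfaces), batches of them are exactly the coherent workload mentioned above.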
(btw, i'm guessing from those pics that you divide by the squared distance when sampling those hemisphere rays; this is incorrect, since sampling over directions already includes that effect: the further away the objects are, the less often they'll contribute their brightness.)
(L) [2006/05/24] [Guest] [Disco Stu lying on the floor and looking at the disco lights] Wayback!I think there's a bug in the reflection color-calculation...
After getting the basics of photonmapping I think I'll deep-explore the page of the guy you mentioned above; got a glimpse of his stuff, very very nice:)
(L) [2006/05/24] [lycium] [Disco Stu lying on the floor and looking at the disco lights] Wayback!about the physical correctness of the exposure function: in a sense it's trying to emulate the process of how real film works, and i see no reason why this approximation couldn't be called "physically correct". however, all these techniques for compressing the huge dynamic range into the limited gamut which crt monitors can display (while still being perceptually "scale preserving") are hardly natural, in the sense that the correct thing to do is have hdr monitors that will blind you when you look at the above images ;)
hugo's simple solution is a good one, and there are other good ones- though less "physically inspired"; for example, the tone mapping operator of reinhard et al works really really well: [LINK http://www.cs.utah.edu/~reinhard/cdrom/]
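the global core of reinhard's operator really is a one-liner (the full paper also scales by the log-average luminance and a key value; this sketch skips all that):

```cpp
// simplest global form of the reinhard et al. tone mapping operator:
// luminance L in [0, inf) compresses to [0, 1), with L = 1 landing at 0.5.
inline double tonemap_reinhard(double L) {
    return L / (1.0 + L);
}
```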
in the end, i would recommend going with something that works well and is simple to implement; the complexity of the human eye is far beyond that of photon mapping and path tracing (esp the immensely complex processing done by the brain)!
(L) [2006/05/24] [tbp] [Disco Stu lying on the floor and looking at the disco lights] Wayback!At this point it's a matter of taste (or what kind of output property you're after), but i find Reinhard's operator to look just plain dull & awful.
PS: visual processing done by the brain isn't immensely complex as the machinery itself fits the cranial volume (and that's a top limit) [SMILEY Wink]
(L) [2006/05/24] [lycium] [Disco Stu lying on the floor and looking at the disco lights] Wayback!the visual processing done by the brain is huuuugely complex! for example, most people aren't even aware that there's stuff floating around inside their eye gel, because the brain totally filters that out. the fact that we can still read a book half-shadowed and half completely lit by the sun, without much difficulty, is another wonder taken for granted.
in general, the eye/brain machinery present in many animals is among the most developed things on this planet, many regard it as a close 2nd to the brain on its own.
(L) [2006/05/24] [tbp] [Disco Stu lying on the floor and looking at the disco lights] Wayback!It's certainly complex, still that processing only require ~2kg of organized matter and the bootstrapping (genome) code fits on a CD-ROM.
It doesn't help our understanding that a) you have to use a brain to understand a brain b) it doesn't seem to work like any machinery we build - ie puters; plus as humans we like to think we're the top of the line (which only tells how ignorant we are).
(L) [2006/05/24] [lycium] [Disco Stu lying on the floor and looking at the disco lights] Wayback!hmm, this aside is already getting a bit long ;) firstly, i don't think weight is a good measure of complexity (eniac ;), and lastly we are kinda top-of-the-line for a very simple reason: we can draw ideas right out of thin air. gödel's incompleteness theorem is a perfect example of this- no machine will ever give you such a result, by the result itself! this highlights the very basis for this mystery- where do our ideas come from?
[i don't really want to go down this metamathematical/philosophical path, since any discussion stemming from what one reads in "gödel, escher, bach" usually ends in flames ;) but i had to highlight the fact that in a certain sense, our brains really are top-of-the-line, and very very special despite the moderate weight ;)]
(L) [2006/05/27] [lycium] [Disco Stu lying on the floor and looking at the disco lights] Wayback!quick thing: though i did say "use jensen's code" above, i also feel that writing one's own code to match what's out there is the best/only way to learn. i suggested using pre-cooked code only because when one's at the "get it working" stage, take-no-prisoners simplification is probably a good idea. he can then make a smooth transition from what he knows is a correct, working solution to his own (besides having a performance baseline to work with).