Re: Dissapointing building times from lbvh gpu based builder
(L) [2012/01/16] [tby outofspace] [Re: Dissapointing building times from lbvh gpu based builder] Wayback!spectral: I was not referring to Kirill's frustum paper (which however seems pretty clear to me), but to our
joint work on HLBVH (and to his separate work on "Grid SAH", which I also plan to include in nih over time).
As to SBVH: while it's certainly the algorithm which produces the best trees for a given memory budget,
I believe it's actually possible to get very close to it with a little pre-splitting (see the Grid SAH paper above),
which can also be done effectively in real-time.
So I wouldn't personally spend much time on optimizing SBVH itself: at best, it will give a 5-10% improvement
at a 200+% build cost.
straaljager: it's just yet another item on my (lower priority) todo list.
I plan to write something about it (probably a technical report), but I don't know when,
and most probably it won't be in the next 3 months.
For now, I can just say it's based on an efficient CUDA implementation of parallel range queries.
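The actual CUDA implementation was never shared in the thread, but the primitive itself is easy to illustrate: a range query over sorted keys is just two binary searches, and a data-parallel version runs one such query per GPU thread over the same sorted array. A minimal CPU sketch in C++ (hypothetical names, not outofspace's code):
```cpp
#include <algorithm>
#include <cstdio>
#include <vector>

// Count how many sorted keys fall inside [lo, hi] - the core of a range
// query. In a data-parallel version, each GPU thread would run this same
// pair of binary searches for its own query interval.
size_t range_count(const std::vector<unsigned>& sorted_keys,
                   unsigned lo, unsigned hi) {
    auto first = std::lower_bound(sorted_keys.begin(), sorted_keys.end(), lo);
    auto last  = std::upper_bound(sorted_keys.begin(), sorted_keys.end(), hi);
    return static_cast<size_t>(last - first);
}

int main() {
    std::vector<unsigned> keys = {1, 3, 3, 7, 9, 12, 15};  // already sorted
    std::printf("%zu keys in [3, 9]\n", range_count(keys, 3, 9));  // prints 4
    return 0;
}
```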
(L) [2012/01/17] [tby dbz] [Re: Dissapointing building times from lbvh gpu based builder] Wayback!Thanks for your comments.
 >> outofspace wrote:And then now that real-time ray tracing is a solved problem (thanks to Kirill), there are plenty of other
interesting things in my todo list.
A Whitted integrator may run in real-time but I don't think ray tracing in general can be done in real time. Some global illumination
effects still take many hours to render on either cpu or gpu. Or did Nvidia invent something that renders physically correct global illumination
in less than 10 ms as well?
(L) [2012/01/18] [tby outofspace] [Re: Dissapointing building times from lbvh gpu based builder] Wayback!Have I ever mentioned global illumination or full light transport simulation? [SMILEY ;)]
I just said ray tracing. It's now possible to solve primary and a little secondary visibility
on fully dynamic models at rates that were previously possible only with rasterization, that's it.
The rest is pretty much a matter of time, letting processors and sampling/reconstruction
algorithms evolve. From a purely computational point of view, I suspect it will take at least
10 years before we'll have the raw compute power needed to run fully realistic light transport
in real-time (and that's without taking into account increasing display resolution).
Possibly 15 or 20.
What I am saying is just that I am now a little less interested in research on basic ray tracing.
p.s.
This has nothing to do with this thread, but: I am getting really bored by the words
"physically correct" - Feynman is probably laughing really hard whenever graphics folks say that -
shouldn't we all switch to "physically based", "physically inspired" or "statistically correct" instead?
(L) [2012/01/18] [tby spectral] [Re: Dissapointing building times from lbvh gpu based builder] Wayback!>> outofspace wrote:This has nothing to do with this thread, but: I am getting really bored by the words
"physically correct" - Feynman is probably laughing really hard whenever graphics folks say that -
shouldn't we all switch to "physically based", "physically inspired" or "statistically correct" instead?
I agree... we all know that there is no physically-correct engine... even if some claim there is [SMILEY :-P] They only approach the physically correct...
Like you, I prefer 'physically based'... it is more appropriate [SMILEY :-)]
Anyway, it is just a question of terminology, but you're right!
(L) [2012/01/18] [tby dr_eck] [Re: Dissapointing building times from lbvh gpu based builder] Wayback!@outofspace  Thanks for the pointer to Garanzha's HLBVH paper.  [LINK http://garanzha.com/default.aspx] The BVH build times are remarkable.
You can also add me to the list of supporters for "physically based".
(L) [2012/01/18] [tby ingenious] [Re: Dissapointing building times from lbvh gpu based builder] Wayback!How about a list of "unbiased" haters? Count me in  [SMILEY :twisted:]
(L) [2012/01/19] [tby graphicsMan] [Re: Dissapointing building times from lbvh gpu based builder] Wayback!I don't hate unbiased. I think it's a useful concept. The trouble is when people think that unbiased means anything without stating the problem being solved, or when they assume that biased solutions must be worse.
(L) [2012/01/19] [tby dr_eck] [Re: Dissapointing building times from lbvh gpu based builder] Wayback!Both consistency and bias are important concepts for a physically based ray tracer.  Just for review:  a consistent ray tracer will converge to the correct solution if enough rays are traced.  Fewer rays will only result in noise.  An unbiased ray tracer will give a result that is statistically accurate, even if a "small" number of rays is traced.
For example, photon mapping is biased.  As Jensen writes on p. 53 of his book, "The price we pay for using density estimation to compute illumination statistics is that the method is no longer unbiased.  This means that the average expected value of the method may not be the correct value.  However, the technique is consistent which means that it will converge to the correct result as more points/photons are used."
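In symbols, using the standard estimator-theory definitions (nothing renderer-specific), an estimator \(\hat{I}_N\) of the true value \(I\) computed from \(N\) samples is:
```latex
% Unbiased: correct on average, at every sample count N
\mathbb{E}[\hat{I}_N] = I \quad \text{for every } N

% Consistent: converges (in probability) to the correct value
\hat{I}_N \xrightarrow{\;p\;} I \quad \text{as } N \to \infty
```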
(L) [2012/01/19] [tby outofspace] [Re: Dissapointing building times from lbvh gpu based builder] Wayback!Way too much importance is given to unbiasedness.
Consistency is completely fine if there is a way to control the error.
Remember that "statistically accurate" doesn't buy you anything if all you get is noise.
In other words, if the amount of error you get in the form of bias+variance is the same
or less than the error you'd get in the form of variance otherwise, then simple consistency
can even be a win.
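In symbols, this is just the usual mean-squared-error decomposition: a biased estimator wins whenever the bias it introduces buys a larger reduction in variance.
```latex
\mathbb{E}\!\left[(\hat{I} - I)^2\right]
  = \underbrace{\bigl(\mathbb{E}[\hat{I}] - I\bigr)^2}_{\text{bias}^2}
  + \underbrace{\operatorname{Var}\bigl[\hat{I}\bigr]}_{\text{variance}}
```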
And in practice, robust light transport simulation can only be done with consistent methods -
unbiased sampling techniques (based on local path sampling) cannot capture all transport paths.
So as long as your renderer supports specular reflection and transmission, you cannot even claim
statistical correctness with any unbiased method - or at least not on all the scenes a user could create
(or on any of the scenes you see in daily life - any light bulb in any room will generate tons of
SDS paths, or paths that are, for all practical purposes, SDS, even if you model the real camera aperture
and the real tungsten filaments - or the sun; the probabilities are just close to infinitesimal if you use
bidirectional path tracing).
(L) [2012/01/19] [tby graphicsMan] [Re: Dissapointing building times from lbvh gpu based builder] Wayback!On the other hand, understanding bias and its sources is important. For example, proper importance sampling is unbiased, and doing it wrong can have important consequences. Even in algorithms such as photon mapping that have bias, we still try to limit the sources of our bias, and reduce those that remain so as to introduce the smallest amount of bias possible while retaining the benefits of introducing bias in the first place. If you're not careful, consistency also goes out the window.
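As a toy illustration of that point (a one-dimensional integral, not production code): weighting each sample by one over its pdf keeps the estimator unbiased, while silently dropping the weight biases it.
```cpp
// Importance sampling of I = \int_0^1 3x^2 dx = 1 using samples from
// p(x) = 2x. Weighting each sample by f(x)/p(x) keeps the estimator
// unbiased; dropping the 1/p(x) weight (the "wrong" version) biases it.
#include <cmath>
#include <cstdio>
#include <random>

int main() {
    std::mt19937 rng(42);
    std::uniform_real_distribution<double> u01(0.0, 1.0);

    const int N = 1000000;
    double unbiased = 0.0, wrong = 0.0;
    for (int i = 0; i < N; ++i) {
        // Inverse-CDF sample of p(x) = 2x; using 1-u avoids x = 0.
        double x = std::sqrt(1.0 - u01(rng));
        double f = 3.0 * x * x;           // integrand
        double p = 2.0 * x;               // pdf of the sample
        unbiased += f / p;                // correct: weight by 1/p
        wrong    += f;                    // incorrect: pdf ignored
    }
    // Expected output: ~1.0 for the unbiased estimator, ~1.5 for the wrong one.
    std::printf("unbiased: %f  wrong: %f\n", unbiased / N, wrong / N);
    return 0;
}
```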
(L) [2012/01/19] [ost by outofspace] [Re: Dissapointing building times from lbvh gpu based builder] Wayback!Sure - some care is needed in everything one does.  [SMILEY ;)]
(L) [2012/01/19] [ost by ingenious] [Re: Dissapointing building times from lbvh gpu based builder] Wayback!>> dr_eck wrote:An unbiased ray tracer will give a result that is statistically accurate, even if a "small" number of rays is traced.
"Statistically accurate"? And what does that mean? I can easily give you an example where a biased estimator is by all statistical means more accurate than an unbiased estimator of the same value with the same number of samples [SMILEY :)]
Actually there's no need for me to give you an example - on many scenes irradiance caching produces a solution that is closer to the true solution than a path tracer with either the same number of samples or same rendering time, you choose. It is plain wrong to claim that a random realization of one estimator will be closer to the true solution than a random realization of another estimator, just because the expected value of the first one is equal to the true value. Come on, people! Not to mention that the central limit theorem only works for a "sufficiently large number of samples".
Now, consistency is very important indeed, and I agree with outofspace and graphicsMan. If your estimator is not consistent, then there's a good chance that in the future, when the number of samples taken increases due to better hardware, your method will become useless, as it won't give a better solution with more samples. And that's what matters more.
(L) [2012/01/21] [ost by outofspace] [Re: Dissapointing building times from lbvh gpu based builder] Wayback!Yes - this is a fine point that many in graphics don't seem to get.
All that unbiased means in practice is that, thanks to the Central Limit Theorem,
an infinite sum of finite runs will converge to zero error.
(it also means that each run has an expected error of zero over infinitely many realizations,
but this in practical terms has no value at all - all that matters is the above).
Which is pretty much the same definition of consistency: the limit value of the algorithm's
output has zero error.
So in the real world, all that matters is (a) having a convergent algorithm, and (b) being able to run the algorithm
indefinitely (e.g. the algorithm's resource usage shouldn't grow with time).
Once these two points are satisfied, the only important factor is convergence speed.
What unbiasedness often gives compared to pure consistency is just a mathematical way to
reason about the convergence speed in terms of probability theory - but if you can get that by
other means, there's no advantage at all.
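Concretely, for an estimator built as an average of N independent unbiased samples with finite variance, the Central Limit Theorem is what turns "unbiased" into a usable convergence rate:
```latex
\hat{I}_N = \frac{1}{N}\sum_{i=1}^{N} X_i, \qquad \mathbb{E}[X_i] = I
\;\Longrightarrow\;
\sqrt{N}\,\bigl(\hat{I}_N - I\bigr) \xrightarrow{\;d\;} \mathcal{N}(0,\sigma^2),
\quad \text{i.e. RMS error} \approx \sigma/\sqrt{N}.
```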
(L) [2012/01/22] [ost by dbz] [Re: Dissapointing building times from lbvh gpu based builder] Wayback!>> outofspace wrote: From a purely computational point of view, I suspect it will take at least
10 years before we'll have the raw compute power needed to run fully realistic light transport
in real-time (and that's without taking into account increasing display resolution).
Possibly 15 or 20.
That is also what I fear. There have been a few GPU path tracing demos on very simple scenes which fit in constant memory that more or less run in real time, but other than that the prospects are rather grim. Initially I got quite excited by GPU path tracing, but I see nowhere near the 10-100x speed-up promised by some GPU manufacturers, except for simple scenes or scenes with a very limited number of bounces per ray.
(L) [2012/01/24] [ost by trierman] [Re: Dissapointing building times from lbvh gpu based builder] Wayback!Hi guys, I would agree that 320 ms for a high-quality build is pretty great.
I spent quite some time implementing the first HLBVH from Jacopo's initial paper, and my head almost exploded during the coding... head to node... segment heads and so on.
But we managed to make it run pretty well in both CUDA and OpenCL, and don't get me wrong, it was a really good paper and we use it for a lot of projects.
I just started working on the work-queue version; it seems to be a lot simpler, and I will report some timings when I get there.
I am also looking forward to reviewing the nih framework; it is a great inspiration for us mortals.
(L) [2012/01/25] [ost by keldor314] [Re: Dissapointing building times from lbvh gpu based builder] Wayback!>> outofspace wrote:All that unbiased means in practice is that, thanks to the Central Limit Theorem,
an infinite sum of finite runs will converge to zero error.
(it also means that each run has an expected error of zero over infinitely many realizations,
but this in practical terms has no value at all - all that matters is the above).
This can be quite an important distinction if you think of it in terms of a render farm - with an unbiased algorithm, you just have each node separately render the scene at a lower quality and then average the results.  With a biased (but nonetheless consistent) renderer, it's more complicated.
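A toy sketch of that averaging step in C++ (hypothetical types and names, assuming each node returns its per-pixel estimate plus the sample count it used); a sample-count-weighted average of unbiased estimates is itself unbiased, which is exactly why the naive merge works:
```cpp
#include <vector>

// Merging per-node results when each node returns an unbiased estimate of
// the same image. Nodes render independently at low quality; the combined
// image is a sample-count-weighted average, which remains unbiased.
struct NodeResult {
    std::vector<float> pixels;  // per-pixel radiance estimate
    long long samples;          // samples this node took per pixel
};

std::vector<float> merge(const std::vector<NodeResult>& nodes) {
    std::vector<float> out(nodes.front().pixels.size(), 0.0f);
    long long total = 0;
    for (const NodeResult& n : nodes) {
        for (size_t i = 0; i < out.size(); ++i)
            out[i] += n.pixels[i] * static_cast<float>(n.samples);
        total += n.samples;
    }
    for (float& v : out) v /= static_cast<float>(total);
    return out;
}
```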
(L) [2012/01/25] [ost by spectral] [Re: Dissapointing building times from lbvh gpu based builder] Wayback!It seems interesting... I will give it a try and implement a high-quality BVH builder in OpenCL too.
You say that you have implemented it in both OpenCL and CUDA - why both? Did you encounter any performance differences between CUDA and OpenCL? Have you been able to use HLBVH on the CPU too?
Thx
(L) [2012/01/25] [ost by keldor314] [Re: Dissapointing building times from lbvh gpu based builder] Wayback!OptiX now has a GPU-based HLBVH builder, as of version 2.5. Has anyone tried to benchmark it yet?
(L) [2012/01/25] [ost by dr_eck] [Re: Dissapointing building times from lbvh gpu based builder] Wayback!>> OptiX now has a GPU-based HLBVH builder, as of version 2.5. Has anyone tried to benchmark it yet?
Does anyone know if it is work-queue based or original?
(L) [2012/01/25] [ost by trierman] [Re: Dissapointing building times from lbvh gpu based builder] Wayback!We just had a visit from David McAllister (nice chap), and I think he mentioned that the BVH builder in current OptiX is in fact the work-queue version.
(L) [2012/01/25] [fursund] [Re: Dissapointing building times from lbvh gpu based builder] Wayback!>> spectral wrote:You say that you have implemented it in both OpenCL and CUDA - why both? Did you encounter any performance differences between CUDA and OpenCL? Have you been able to use HLBVH on the CPU too?
We implemented the original HLBVH paper in both OpenCL and CUDA, to investigate the possibilities (and performance differences) in using AMD as well as NVIDIA GPUs. We haven't investigated how well the method works on the CPU, but hopefully we'll get to it in the future.
(L) [2012/01/26] [spectral] [Re: Dissapointing building times from lbvh gpu based builder] Wayback!Great!
And how does it perform on ATI GPUs? No problems with the barriers?
(L) [2012/01/26] [fursund] [Re: Dissapointing building times from lbvh gpu based builder] Wayback!>> spectral wrote:And how does it perform on ATI GPUs? No problems with the barriers?
Yes, indeed, quite a few problems with barriers [SMILEY :)]. We might release some performance info at a later time.
(L) [2012/01/26] [voidcycles] [Re: Dissapointing building times from lbvh gpu based builder] Wayback!Any chance of getting access to the HLBVH OpenCL version for academic use?
(L) [2012/01/26] [fursund] [Re: Dissapointing building times from lbvh gpu based builder] Wayback!>> voidcycles wrote:Any chance of getting access to the HLBVH OpenCL version for academic use?
No, I don't think that's possible, sorry.
(L) [2012/01/27] [mpeterson] [Re: Dissapointing building times from lbvh gpu based builder] Wayback!>> dr_eck wrote:OptiX now has a GPU-based HLBVH builder, as of version 2.5. Has anyone tried to benchmark it yet?
Does anyone know if it is work-queue based or original?
I think it is the work-queue version, because the original one has strong CPU dependencies.