Implementing a Shader System
Board index: Raytracing › General Development
(L) [2014/02/25] [tby janzdott] [Implementing a Shader System] Hi, I'm new here. I'm glad I found a ray tracing forum, because a couple of questions I had earlier went unanswered for lack of a good place to ask them. I've been learning on my own from articles and papers online, no books yet. Any recommendations? "Physically Based Rendering: From Theory to Implementation" looks like it would be a good resource.
I started working on my ray tracer two or three weeks ago.  It has all the basics:  BVH, multithreading, point lights, area lights, and global illumination.  It runs on the CPU and doesn't use SSE.  I may use SSE down the road, but it looks very troublesome to get it working optimally.  I've been thinking about utilizing the GPU as well as the CPU.  Do many ray tracers use both?
Now that I have the basics done, I'd like to add a shader system. How do most ray tracers handle this? I've looked into Open Shading Language. It looks nice, but there is very little documentation describing how to integrate it into a renderer. Does anyone have recommendations for how I should go about adding a shader system?
Here's a render from my ray tracer.  Took about 2 hours on my mediocre laptop.  This is before I added stratified sampling for reflected rays, so I could probably get the same quality render in less time now.
[IMG #1 Image]
(L) [2014/02/26] [tby lion] [Implementing a Shader System] Try these lessons: [LINK http://www.scratchapixel.com/lessons/3d-basic-lessons/lesson-15-introduction-to-shading-and-radiometry/]
Also try Intel ispc or OpenCL as alternatives to raw SSE.
(L) [2014/02/27] [tby janzdott] [Implementing a Shader System] I've read a lot of the material on that site. It's been very useful, and by far the best source of radiometry information. But it doesn't go into implementation details at all.
(L) [2014/02/27] [tby spectral] [Implementing a Shader System] Hi,
There are a lot of ways to handle materials, for example:
1) uber shader
You can just have one shader with a lot of parameters (see Maxwell Render, for example).
You can then create several layers, each one having its own set of parameters.
It is a good approach if you are able to create such a material.
See the ABg material... maybe ask "dr_eck"...
2) multi-layer / multi-material
You can have a layered BRDF system (see LuxRender, pbrt, ...): several BRDFs that you stack in layers.
3) Network system
See Blender; it is very user friendly and visually very nice, but it can be more complex to handle (mainly on the GPU, where you want to keep it efficient).
4) Shading language
It is the most advanced approach. The problem is that it is more complex to integrate correctly into a GPU renderer, simply because, compared to the other systems, you have no prior knowledge of the probability of sampling a given material, so it can be harder to handle without losing performance! (Remember that there is no recursion, etc. on the GPU.)
if ( RANDOM_CONDITION() )
 Ci = diffuse();
else
 Ci = phong();
The problem is that you don't know whether "RANDOM_CONDITION()" will be taken (it can always be false), so you can spend time evaluating layers that you will never account for!
Anyway, it is difficult to explain (not sure I'm being clear)...
Summary
In your place I would just go for a multi-layer / multi-material system like they do in PBRT, LuxRender and other systems. It is flexible and user friendly, and very easy to handle and optimize, even on the GPU [SMILEY ;-)]
Another approach is the uber shader; the ABg approach of "dr_eck" (a user on this forum) is really powerful and interesting... but I have never seen it implemented. Anyway, if you are interested I can help you find the PDF and all the other sampling information...
(L) [2014/02/27] [tby papaboo] [Implementing a Shader System] Hey,
Welcome to the world of noise and pretty colors [SMILEY :)]
Physically Based Rendering is a great resource, yes. Start there and then go digging for articles on anything you want to know more about.
As spectral said, the easiest option is to base your material system on layered BSDFs. This allows you to describe your materials in a high-level fashion, e.g. by combining a diffuse and a glossy BRDF via a Fresnel layer, or by creating transmissive materials by wrapping a BSDF in a BRDFtoBTDF layer. The downside is that you can't create arbitrary ad-hoc materials, but it provides a clean material interface that you can use for consistent rendering across several integrators (nice for experimenting).
Depending on whether or not this renderer you're building is commercial/closed, I would also recommend Mitsuba as a source of inspiration for materials and the renderer in general.
As for CPU vs GPU it comes down to a lot of factors:
Flexibility: If you just want to play around with some prototypes and don't care too much about speed, then I'd go for a CPU raytracer. Interfaces make it really easy to combine BSDFs at runtime, but you pay the price with virtual lookups.
Speed: I don't think there's a clear winner here, but my view is that it takes less time to get something fast on the GPU, as SIMT is a built-in feature and not something you have to add afterwards, as with multithreading and SIMD on the CPU. However, a good CPU ray tracer will probably be faster than a GPU one if you're sitting on a laptop. You could also go for an OpenCL implementation and distribute your work over both platforms.
Debugging: This is where GPU ray tracers basically suck (although NVIDIA would never admit it [SMILEY ;)] ). If one of your rays does something dumb, it can crash your entire driver, making debugging a nightmare (plus I've run into a fair share of edge cases by now where nvcc just produces faulty programs from sane code). If you don't need a GPU ray tracer, then debuggability is the reason I would stay on the CPU.
Memory: CPUs generally have more RAM available to them than GPUs, which could be important to you in the long run.
If you (still) want to try out a GPU ray tracer, then OptiX might be a good starting point. It allows you to get started relatively quickly, by letting you focus on intersection and shading and handling the acceleration structure for you behind the scenes.
If you want to look into an optimized CPU ray tracer, then have a look at Intel's Embree. I haven't looked much into it myself yet, but they should have spent a lot of time optimizing it, so if you want speed it should be a good place to start.
Happy shading.
(L) [2014/02/27] [tby lion] [Implementing a Shader System] >> papaboo wrote: If one of your rays does something dumb, it can crash your entire driver, making debugging a nightmare
Ha-ha, I've faced that many times =) I even failed to get Intel OpenCL debugging to work: the breakpoints are set, but they just don't trigger.
(L) [2014/02/27] [tby Dade] [Implementing a Shader System] >> lion wrote: papaboo wrote: If one of your rays does something dumb, it can crash your entire driver, making debugging a nightmare
Ha-ha, I've faced that many times =) I even failed to get Intel OpenCL debugging to work: the breakpoints are set, but they just don't trigger.
You should try running GDB on Linux with AMD drivers; they can totally freeze your machine at any random point. It is like running blind through a minefield, an experience only for the strong of heart [SMILEY :lol:]
(L) [2014/02/27] [tby janzdott] [Implementing a Shader System] I use Blender, and I like how you can combine multiple materials with a blend factor. To handle this, I would assume the blend factor is used as a probability, and each ray then renders one of the materials, chosen randomly according to those probabilities?
I think I might just add a shader class then give it inputs and outputs and a virtual main function.  That would only be a temporary solution though.
I think I'm going to stay away from the GPU, for now at least. The performance is pretty acceptable on the CPU. It's comparable in speed to Blender, which really surprises me since I haven't put much effort into optimization yet. I don't think adding materials/shaders will slow it down much. I also added a window that displays the render and updates as each sample finishes.
My render above is incorrect. I wasn't doing light absorption right, which is why you can see the white bleed onto the colored parts of the walls. I also wasn't doing real anti-aliasing: each ray only contributed to its own pixel. I've fixed both of those.
(L) [2014/02/28] [tby Dietger] [Implementing a Shader System] >> janzdott wrote: I don't think adding materials/shaders will slow it down much.
Think again [SMILEY :D]
Unfortunately the days of 10% shading / 90% traversal are over (if that was ever true to begin with). Traversal performance scales quite well with increasing geometric complexity, while shading performance does not scale nearly as well with increasing material complexity. Importance sampling and evaluating a multi-layered material where every parameter can be driven by a combination of (procedural) textures can become quite expensive, and that is what you will get if you give such flexibility to an artist. Assuming your ray traversal is well optimized, you should expect shading to take up a significant chunk of the overall render time.
Dietger
(L) [2014/03/06] [tby janzdott] [Implementing a Shader System] Dietger, I won't be writing any procedural shaders myself, but I would like that to be an option, which is why I'm adding a shader system. I doubt the simple built-in shaders (diffuse, glossy, textures, etc.) that I'll be adding will slow down render times too much.
I've been looking into Open Shading Language some more.  I read the specification, which wasn't much help.  There aren't any resources that I'm aware of, so I'm looking at Blender's source code.  OSL uses something called a closure that somehow specifies radiance as a function.  The specification doesn't explain this well at all.  It basically says you can use the closure to determine which directions to sample.  I have no idea how this works.  I'll have to continue looking at Blender's source code to figure out how to determine sampling directions from a closure.
OSL does seem like a very good solution that would be fairly easy to implement.  I'm going to keep looking through Blender's source code until I get it figured out.  Until then, I'm open to other suggestions.