Visionaray Ray Tracing Framework


[2015/03/21] [post by szellmann]

Hi there,
I'd like to take this opportunity to advertise Visionaray, which is (yet another) ray tracing framework.
[LINK https://github.com/szellmann/visionaray]
[LINK https://github.com/szellmann/visionaray/wiki]
Visionaray is based on generic programming with C++. In contrast to other frameworks, its main goal is to achieve platform independence: write "kernels" in C++, and have "schedulers" (CUDA, TBB, C++ threads) execute those kernels on the hardware you desire. Users write their own kernels, though the framework provides a few predefined ones (primary rays only, Whitted, path tracing, ...).
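To give you a flavor of the idea, here is a minimal sketch of the kernel/scheduler split. Note that all names in it (Ray, Color, primary_ray_kernel, cpu_thread_sched) are made up for illustration and are not Visionaray's actual API - see the wiki for the real interface.
[CODE]
#include <thread>
#include <vector>

// Illustration of the kernel/scheduler split. All names here are
// invented for this sketch and are NOT Visionaray's actual API.
struct Ray   { float ox, oy, oz, dx, dy, dz; };
struct Color { float r, g, b, a; };

// A "kernel" is just a callable mapping a ray to a color. The same
// source can be compiled for the CPU or, with CUDA, for the GPU.
struct primary_ray_kernel
{
    Color operator()(Ray const& r) const
    {
        // Trivial shading: visualize the ray direction as a color.
        return { 0.5f * r.dx + 0.5f, 0.5f * r.dy + 0.5f, 0.5f * r.dz + 0.5f, 1.0f };
    }
};

// A "scheduler" decides where and how the kernel runs. This one hands
// out scanlines to C++ threads; a CUDA scheduler would instead launch
// the same kernel on the device.
template <typename Kernel>
void cpu_thread_sched(Kernel kernel, std::vector<Color>& fb, int w, int h)
{
    unsigned n = std::thread::hardware_concurrency();
    if (n == 0) n = 4;
    std::vector<std::thread> pool;
    for (unsigned t = 0; t < n; ++t)
    {
        pool.emplace_back([=, &fb] {
            for (int y = (int)t; y < h; y += (int)n)
                for (int x = 0; x < w; ++x)
                {
                    // Generate a primary ray through pixel (x,y);
                    // orthographic along +z for brevity.
                    Ray r = { (float)x, (float)y, 0.0f, 0.0f, 0.0f, 1.0f };
                    fb[y * w + x] = kernel(r);
                }
        });
    }
    for (auto& th : pool) th.join();
}
[/CODE]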
Visionaray is the preliminary result of a hobby project of mine that I mainly pursue in my spare time, and for a few weeks now partially during my work at the University of Cologne. Visionaray is open source (MIT license). In its current state, I consider it not yet mature enough to be of any use in a real project, but we (at this time basically a student of mine and me) are working on it.
Companion projects are concerned with ray tracing in VR (a plugin for the VR renderer OpenCOVER ([LINK https://github.com/hlrs-vis/covise]) comes with Visionaray) and ray tracing on FPGAs (we're working on a Xilinx Vivado HLS scheduler and porting code to fixed point [SMILEY :-)] ).
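As an aside, for those wondering what porting to fixed point amounts to: a minimal Q16.16 sketch (my own illustration, not our actual HLS code).
[CODE]
#include <cstdint>

// Minimal Q16.16 fixed-point type - the kind of arithmetic an FPGA
// port replaces floats with. Illustration only, not Visionaray code.
struct fixed32
{
    int32_t v; // stores value * 2^16

    static fixed32 from_float(float f) { return { (int32_t)(f * 65536.0f) }; }
    float to_float() const             { return v / 65536.0f; }

    fixed32 operator+(fixed32 o) const { return { v + o.v }; }
    fixed32 operator-(fixed32 o) const { return { v - o.v }; }

    // Multiply in 64 bits, then shift back down to Q16.16.
    fixed32 operator*(fixed32 o) const
    {
        return { (int32_t)(((int64_t)v * o.v) >> 16) };
    }
};
[/CODE]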
Best,
Stefan
[2015/06/29] [post by szellmann]

I started to write a series of Visionaray tutorials on medium.com:
[LINK https://medium.com/tag/visionaray]
Check back for updates; more tutorials are coming soon!
[2015/07/01] [post by papaboo]

Sounds really cool.
Slightly off-topic: in the integration with VR, do you do any GI effects, or is it Whitted only? If you do GI (which I of course hope [SMILEY :)] ), can you share any thoughts on the subject? PT would be the obvious quick-and-dirty solution, but it produces so much noise that it can't work in all scenarios. Some form of baking would be better, but that can reduce interactivity with the scene.
[2015/07/01] [post by szellmann]

It's so far only Whitted, but we're eager to get GI running in VR eventually.
Noise reduction (MIS) isn't there yet, but it's up on the TODO list! I haven't given BDPT on GPUs a real shot, but I surmise it's not a good match for GPUs because of the diverging data paths of the light and eye paths.
Even combining these will of course not totally eliminate noise (as you said).
There's this video of the upcoming Brigade 3, and I believe they go to the cloud for this. We have an MPI scheduler in the works, and we're collaborating with the Computing Centre of the University of Cologne. I'm eager to see how stupid, massive parallelization (sort-first, all nodes have the scene data) can help [SMILEY :)]
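The sort-first idea boils down to something like the following sketch (plain MPI, my own illustration; render_strip is a placeholder, not a Visionaray function):
[CODE]
#include <mpi.h>
#include <vector>

// Sort-first sketch: every rank holds the full scene and renders one
// horizontal strip of the image; rank 0 gathers the strips.
static void render_strip(std::vector<float>& rgba, int /*y0*/, int /*y1*/, int /*w*/)
{
    for (float& f : rgba) f = 0.0f; // stand-in for the actual rendering
}

int main(int argc, char** argv)
{
    MPI_Init(&argc, &argv);
    int rank = 0, size = 1;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int const w = 1920, h = 1080;
    int const rows = h / size;               // assume h % size == 0 for brevity
    std::vector<float> strip(rows * w * 4);  // RGBA strip for this rank
    render_strip(strip, rank * rows, (rank + 1) * rows, w);

    // Gather the strips into the full frame on rank 0 (sort-first:
    // the image, not the scene, is partitioned).
    std::vector<float> frame;
    if (rank == 0) frame.resize((size_t)h * w * 4);
    MPI_Gather(strip.data(), (int)strip.size(), MPI_FLOAT,
               frame.data(), (int)strip.size(), MPI_FLOAT,
               0, MPI_COMM_WORLD);

    MPI_Finalize();
    return 0;
}
[/CODE]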
I've also read about foveated ray tracing: [LINK http://research.lighttransport.com/foveated-real-time-ray-tracing-for-virtual-reality-headset/] - I hope this may help to some degree when using a VR headset.
We're also investigating FPGA ray tracing. We are not yet experts. However, being able to customize the data path sounds really promising - the biggest problem with GI is diverging data paths, I believe. The learning curve with FPGAs is quite steep, though [SMILEY :)]
Baking is only valid for diffuse reflection and static lights, am I right? Or would you call methods like photon mapping baking? The latter is something we are considering incorporating, but in general (because we do this mostly for "researchy" stuff [SMILEY :)] ) we'd prefer unbiased methods.
[2015/07/02] [post by papaboo]

Thanks for the feedback and the link to foveated ray tracing. I haven't read that yet, so now I have an excuse for sitting outside in the sun and doing some reading. [SMILEY :)]
I would consider photon mapping a 'soft' baking approach, since there is a preprocessing step, but it's really quick. A rebake is incredibly fast, but of course a first frame rendered using only PM lacks a lot of information and is either splotchy or incredibly biased. But otherwise yes, I think most realtime baking approaches only work well for diffuse surfaces and static lights. I haven't dug too much into it, though, since so far my focus has been on methods that actually converge to the correct result.
You can actually do BDPT reasonably effectively on the GPU by sharing light paths between rays in a warp. Take a look at Dietger van Antwerpen's thesis. I have no clue, though, whether it's fast enough for interactive purposes. My guess would be 'no', and PM is probably preferable.
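The gist of the sharing trick, stripped of all the actual BDPT machinery (a rough CPU-side sketch of the idea, not code from the thesis):
[CODE]
#include <vector>

// One light subpath is traced per "warp" of pixels, and every pixel in
// that warp connects its eye vertex to the same shared light vertices,
// so the lanes stay coherent.
struct Vertex { float x, y, z, throughput; };

// Stubs standing in for the real tracing and connection routines.
static std::vector<Vertex> trace_light_subpath()
{
    return std::vector<Vertex>(4, Vertex{ 0, 0, 0, 1 });
}
static Vertex trace_eye_vertex(int /*pixel*/) { return { 0, 0, 0, 1 }; }
static float  connect_paths(Vertex const& e, Vertex const& l)
{
    return e.throughput * l.throughput; // stands in for visibility * BSDF * G
}

static void shade_warp(int first_pixel, int warp_size, std::vector<float>& out)
{
    auto light_path = trace_light_subpath(); // shared by the whole warp
    for (int p = first_pixel; p < first_pixel + warp_size; ++p)
    {
        Vertex eye = trace_eye_vertex(p);
        float radiance = 0.0f;
        for (auto const& lv : light_path)    // all lanes walk the same vertices
            radiance += connect_paths(eye, lv);
        out[p] = radiance;
    }
}
[/CODE]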
[2015/09/26] [post by szellmann]

Visionaray is now also on Facebook.
Visit [LINK https://www.facebook.com/visionaray]
[2015/10/21] [post by szellmann]

Some more shameless self-promotion [SMILEY :)]
I wanted to post some progress from the virtual reality front. Here are some shots where you can see (with your red-cyan glasses [SMILEY :)] ) how distracting the noise from unconverged, naive path tracing really is in VR. The problem in VR is that the user moves his/her head all the time, so there's no opportunity for those images to converge. We're at the very beginning with this, so this is how it looks without any optimizations or further thought. Next in line is probably multi-GPU / cluster parallelization to present e.g. ten or so blended frames at once. Sampling may also be improved; it's simple stratified sampling so far, and low-discrepancy (LD) sampling would probably be better.
Here is the video (best viewed with "HD" activated!):
[LINK https://www.facebook.com/visionaray/videos/vb.1080239295327449/1094630350555010/?type=2&theater]
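For reference, the frame blending I have in mind boils down to something like this (a sketch only, assuming a simple ring buffer of RGBA frames; not the actual implementation):
[CODE]
#include <vector>

// Keep a ring of the N most recent frames and present their average,
// trading a little ghosting under head motion for much less noise.
struct FrameBlender
{
    int w, h, n;                          // image size, ring length (e.g. n = 10)
    std::vector<std::vector<float>> ring; // n frames of RGBA
    std::vector<float> sum;               // running sum over the ring
    int next = 0;

    FrameBlender(int w, int h, int n)
        : w(w), h(h), n(n)
        , ring(n, std::vector<float>(w * h * 4, 0.0f))
        , sum(w * h * 4, 0.0f) {}

    // Add the newest frame, drop the oldest, write the average to 'out'.
    void blend(std::vector<float> const& frame, std::vector<float>& out)
    {
        for (size_t i = 0; i < sum.size(); ++i)
        {
            sum[i] += frame[i] - ring[next][i];
            out[i] = sum[i] / n;
        }
        ring[next] = frame;
        next = (next + 1) % n;
    }
};
[/CODE]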
[2016/03/17] [post by szellmann]

Take a look at the new Visionaray multi-volume rendering example:
[LINK https://youtu.be/aMRb3LJzgXs]
[LINK https://github.com/szellmann/visionaray/tree/master/src/examples/multi_volume]
[LINK https://github.com/szellmann/visionaray/wiki/More-examples]
The example program shows a more complex kernel - "SciVis" direct volume rendering with local illumination and correct compositing even if the datasets overlap arbitrarily. The example also demonstrates how to write Visionaray kernels that are compatible with both x86 and CUDA.
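For the curious, the heart of that compositing, reduced to a sketch (this is just the standard front-to-back "over" operator, not the example's actual kernel code):
[CODE]
#include <vector>

// March along the ray and accumulate classified samples front-to-back.
// With several volumes, samples from all volumes are simply visited in
// depth order, which is why arbitrary overlaps composite correctly.
struct RGBA { float r, g, b, a; };

RGBA composite_ray(std::vector<RGBA> const& samples) // sorted near-to-far
{
    RGBA dst = { 0, 0, 0, 0 };
    for (RGBA const& src : samples)
    {
        // Front-to-back "over" operator with premultiplied alpha.
        dst.r += (1.0f - dst.a) * src.a * src.r;
        dst.g += (1.0f - dst.a) * src.a * src.g;
        dst.b += (1.0f - dst.a) * src.a * src.b;
        dst.a += (1.0f - dst.a) * src.a;
        if (dst.a >= 0.999f) break; // early ray termination
    }
    return dst;
}
[/CODE]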
[2016/06/06] [post by szellmann]

Visionaray now supports multi-hit ray/object traversal. Check out the example program:
[LINK https://youtu.be/wv8ZkVoHtDw]
[LINK https://github.com/szellmann/visionaray/wiki/More-examples]
[LINK https://github.com/szellmann/visionaray/tree/master/src/examples/multi_hit]
The multi-hit feature is of course not restricted to alpha compositing (as in the example program). I'm currently working on a direct volume rendering application for medical imaging where I need to clip CT/MR data against some arbitrary opaque geometry. I'm going to use multi-hit to build up clip intervals that I then feed into my ray marcher.
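Roughly, the interval building I have in mind looks like this (a sketch of the idea only, not the finished application code):
[CODE]
#include <algorithm>
#include <utility>
#include <vector>

// Multi-hit returns all intersections of the ray with the clip geometry.
// For closed (watertight) objects, consecutive hits along the ray pair
// up into [entry, exit] intervals that the ray marcher can skip.
std::vector<std::pair<float, float>> build_clip_intervals(std::vector<float> hit_ts)
{
    std::sort(hit_ts.begin(), hit_ts.end());
    std::vector<std::pair<float, float>> intervals;
    for (size_t i = 0; i + 1 < hit_ts.size(); i += 2)
        intervals.push_back({ hit_ts[i], hit_ts[i + 1] });
    return intervals;
}

// During marching, a sample at ray parameter t is discarded if it falls
// inside any clip interval.
bool clipped(float t, std::vector<std::pair<float, float>> const& intervals)
{
    for (auto const& iv : intervals)
        if (t >= iv.first && t <= iv.second) return true;
    return false;
}
[/CODE]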
[2016/12/30] [post by szellmann]

It's the Christmas holidays, and so I had a little fun compiling Visionaray for my new Raspberry Pi 3: [LINK https://youtu.be/7bJTEmdlT2Y]

I haven't done anything special apart from applying some tiny compile fixes for this architecture, so there are no optimizations yet. I'm currently porting the SIMD math lib to ARM NEON and am excited to see what performance difference SoA packets make for coherent workloads on this tiny CPU.
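To give an idea of what the port boils down to (a sketch only; the type and operator names are made up, the NEON intrinsics are the real ones):
[CODE]
#include <arm_neon.h>

// A 4-wide float wrapper: on x86 this kind of type maps to SSE, here it
// maps to NEON. SoA ray packets then trace four rays per operation.
struct float4
{
    float32x4_t v;

    float4() = default;
    float4(float32x4_t v) : v(v) {}
    static float4 load(float const* p) { return float4(vld1q_f32(p)); }
    void store(float* p) const         { vst1q_f32(p, v); }
};

inline float4 operator+(float4 a, float4 b) { return float4(vaddq_f32(a.v, b.v)); }
inline float4 operator*(float4 a, float4 b) { return float4(vmulq_f32(a.v, b.v)); }

// An SoA packet of four rays: each member holds one component for all
// four rays, so advancing all origins is a handful of NEON ops.
struct ray4
{
    float4 ox, oy, oz; // origins
    float4 dx, dy, dz; // directions
};

inline void advance(ray4& r, float4 t)
{
    r.ox = r.ox + r.dx * t;
    r.oy = r.oy + r.dy * t;
    r.oz = r.oz + r.dz * t;
}
[/CODE]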
