(L) [2018/03/22] [post by jbikker] [DXR]
I've been reading some materials and watching videos on DXR over the past couple of days. Some observations:
- The good-looking demos use 3 or 4 Titan Vs.
- In all cases, it's rasterization as a basis with ray tracing effects added.
- All motion is rigid, i.e. no bottom-level acceleration structure rebuilds; only the top-level BVH is updated.
- Distribution effects are supported.
Looks like the tech relies heavily on scenes that 'play nice'. The SEED demo is rigid motion only and requires 3 Titans. It does, however, calculate AO and single-bounce diffuse indirect light, as well as glossy reflections. The Futuremark demo has more complex animation, but the comparison shots of 'with / without ray tracing' indicate that the dynamic character is reflected using screen-space reflections; the character itself reflects static geometry and rigid motion. The Northlight demo is pure static geometry.
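In DXR terms, 'rigid motion only' roughly means that per frame only the per-instance transforms in the top-level acceleration structure change, while the bottom-level structures holding the triangles are built once and left alone. A minimal sketch of such a per-frame TLAS rebuild, illustrative only: the function and parameter names are made up, buffer allocation, the upload of the instance descriptors and synchronization are omitted, and the type names follow the DXR API as it ships in d3d12.h.
Code:
// Rebuild only the top-level acceleration structure (TLAS) each frame.
// The bottom-level structures (BLAS) holding the triangles are built once
// and merely referenced here; animation is expressed purely through the
// per-instance 3x4 transforms, which matches the "rigid motion only" case.
#include <d3d12.h>

void RebuildTopLevelAS(ID3D12GraphicsCommandList4* cmdList,
                       UINT instanceCount,
                       D3D12_GPU_VIRTUAL_ADDRESS instanceDescs, // D3D12_RAYTRACING_INSTANCE_DESC array, already uploaded
                       D3D12_GPU_VIRTUAL_ADDRESS scratchBuffer,
                       D3D12_GPU_VIRTUAL_ADDRESS tlasBuffer)
{
    D3D12_BUILD_RAYTRACING_ACCELERATION_STRUCTURE_INPUTS inputs = {};
    inputs.Type          = D3D12_RAYTRACING_ACCELERATION_STRUCTURE_TYPE_TOP_LEVEL;
    inputs.Flags         = D3D12_RAYTRACING_ACCELERATION_STRUCTURE_BUILD_FLAG_PREFER_FAST_TRACE;
    inputs.DescsLayout   = D3D12_ELEMENTS_LAYOUT_ARRAY;
    inputs.NumDescs      = instanceCount;          // number of instances, not triangles
    inputs.InstanceDescs = instanceDescs;          // new transforms + BLAS addresses

    D3D12_BUILD_RAYTRACING_ACCELERATION_STRUCTURE_DESC build = {};
    build.Inputs                           = inputs;
    build.ScratchAccelerationStructureData = scratchBuffer;
    build.DestAccelerationStructureData    = tlasBuffer;

    // Full TLAS rebuild; cheap because it only walks the instance list.
    cmdList->BuildRaytracingAccelerationStructure(&build, 0, nullptr);
}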
Any other insights or corrections?
(L) [2018/03/22] [post by szellmann] [DXR]
This video [LINK https://www.youtube.com/watch?v=yaK1e59-oAA] suggests that RT is currently a software feature, but that future GPUs will have fixed-function hardware ("...the next generation of AMD and Nvidia will have hardware to speed this up..."). This is the most important takeaway IMO.
(L) [2018/03/22] [post by mpeterson] [DXR]
Having done this for more than 20 years now, all these ads seem odd (at least to me).
Every time I look into the details of an implementation, it's boring crap that was never optimized for the hardware it runs on.
So the good old times have been over for years...
(L) [2018/03/23] [post by cignox1] [DXR]
I'm very happy with all this news: after all, would the market ever have changed without a widely adopted API supporting RT in an incremental way? I don't think so. I hope the next few years will show the same pattern we saw with vertex/fragment shaders. Some 20 years ago I read a document on the web where an expert explained why Gouraud shading (man, I don't even remember the last time I used the word Gouraud) was here to stay: not only was 1 sqrt per pixel infeasible (so no Phong), but there was also no real reason to change.
And yet, we have changed.
I'm excited to hear that developers can now add some RT on top of their rasterization via a standard API, and that GPUs will support it more and more in the future with dedicated hardware. Today we emulate the FFP with shaders. In the future we will emulate rasterization via RT  [SMILEY :D]
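A small aside on the '1 sqrt per pixel' point: the cost gap is simply that Gouraud interpolates colors already computed at the vertices, while per-pixel Phong-style shading has to re-normalize the interpolated normal at every pixel. An illustrative sketch, not taken from any of the linked material:
Code:
// Gouraud vs. per-pixel shading in a nutshell: Gouraud interpolates colors
// that were computed once per vertex, whereas per-pixel (Phong) shading has
// to re-normalize the interpolated normal, which costs a sqrt at every pixel.
#include <cmath>

struct Vec3 { float x, y, z; };

// Gouraud: lighting was done per vertex; per pixel we only lerp the colors.
Vec3 GouraudPixel(Vec3 c0, Vec3 c1, float t)
{
    return { c0.x + t * (c1.x - c0.x),
             c0.y + t * (c1.y - c0.y),
             c0.z + t * (c1.z - c0.z) };
}

// Phong-style: the interpolated normal is no longer unit length, so it needs
// a normalize (and therefore a square root) before the lighting math.
Vec3 NormalizePerPixel(Vec3 n)
{
    float len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z); // the per-pixel sqrt
    return { n.x / len, n.y / len, n.z / len };
}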
(L) [2018/03/24] [post by martin] [DXR]
Here's a quick intro to the API if you don't have time to digest the full Microsoft spec: [LINK https://devblogs.nvidia.com/introduction-nvidia-rtx-directx-raytracing] (disclaimer: author here).
Most demos show only rigid motion, but deformation works well too. Here's an example with skinned characters: [LINK https://www.youtube.com/watch?time_continue=1&v=tjf-1BxpR9c] (sorry for the music [SMILEY ;)] )
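At the API level, 'deformation' usually means refitting (or rebuilding) the bottom-level acceleration structure from the skinned vertex positions each frame, entirely on the GPU. Here is a minimal sketch of such a refit, assuming the BLAS was originally built with ALLOW_UPDATE; the function and parameter names are made up, buffer setup and barriers are omitted, and this is not necessarily what the linked demo does.
Code:
// Refit an existing BLAS from the skinned vertex positions each frame.
// Assumes the BLAS was originally built with ALLOW_UPDATE and that the
// vertices have already been skinned on the GPU (e.g. in a compute pass).
#include <d3d12.h>

void RefitSkinnedBLAS(ID3D12GraphicsCommandList4* cmdList,
                      D3D12_GPU_VIRTUAL_ADDRESS skinnedVertices, UINT vertexCount,
                      D3D12_GPU_VIRTUAL_ADDRESS scratchBuffer,
                      D3D12_GPU_VIRTUAL_ADDRESS blasBuffer) // built earlier with ALLOW_UPDATE
{
    D3D12_RAYTRACING_GEOMETRY_DESC geom = {};
    geom.Type                                 = D3D12_RAYTRACING_GEOMETRY_TYPE_TRIANGLES;
    geom.Flags                                = D3D12_RAYTRACING_GEOMETRY_FLAG_OPAQUE;
    geom.Triangles.VertexFormat               = DXGI_FORMAT_R32G32B32_FLOAT;
    geom.Triangles.VertexCount                = vertexCount;
    geom.Triangles.VertexBuffer.StartAddress  = skinnedVertices;
    geom.Triangles.VertexBuffer.StrideInBytes = 3 * sizeof(float);

    D3D12_BUILD_RAYTRACING_ACCELERATION_STRUCTURE_INPUTS inputs = {};
    inputs.Type           = D3D12_RAYTRACING_ACCELERATION_STRUCTURE_TYPE_BOTTOM_LEVEL;
    inputs.Flags          = D3D12_RAYTRACING_ACCELERATION_STRUCTURE_BUILD_FLAG_ALLOW_UPDATE |
                            D3D12_RAYTRACING_ACCELERATION_STRUCTURE_BUILD_FLAG_PERFORM_UPDATE;
    inputs.DescsLayout    = D3D12_ELEMENTS_LAYOUT_ARRAY;
    inputs.NumDescs       = 1;
    inputs.pGeometryDescs = &geom;

    D3D12_BUILD_RAYTRACING_ACCELERATION_STRUCTURE_DESC build = {};
    build.Inputs                           = inputs;
    build.SourceAccelerationStructureData  = blasBuffer;   // refit in place:
    build.DestAccelerationStructureData    = blasBuffer;   // source == dest is allowed for updates
    build.ScratchAccelerationStructureData = scratchBuffer;

    cmdList->BuildRaytracingAccelerationStructure(&build, 0, nullptr);
}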
In the demos that used multiple GPUs we don't actually get a full 3-4x scaling, simply because of the way the engines distribute their work (it's all explicitly app-controlled in DX12). So yes, while we threw as much hardware at it as we could for the GDC runs, perf on a single GPU is only 2-2.5x slower than what you see in the videos. And some demos (Remedy, Futuremark) don't use multi-GPU at all.
(L) [2018/03/24] [post by jbikker] [DXR]
Hi Martin, thanks for your reply!
"Deformation" as in "refitting"? How many triangles can you rebuild per frame, and is this done on the GPU? And could you say something about the ray throughput of the system for average scenes (~100k tris or so)?
(L) [2018/03/26] [post by koiava] [DXR]
Hello world!!
It is a really interesting demo. I liked it a lot! I know that it's running on super expensive hardware, not suited for the average gamer, but it's still very interesting. I guess the main reason it uses Volta GPUs is denoising; it also seems to have separate networks for shadow and glossy-reflection denoising. So, as a conclusion, for me the most impressive part of this demo is the denoising, not the ray tracing.
I'm also skeptical about the rasterization part: it calculates direct visibility with rasterization and then does ray tracing, but I guess it needs good occlusion culling to avoid tracing rays for invisible shading points, because that might kill all the performance benefits you get from rasterization.