Gathering Feedback: Open Problems in Real-Time Raytracing

For HPG 2018’s keynote (co-located with SIGGRAPH 2018, two days before it starts) I’ll be discussing some of the latest advances in game raytracing and, most notably, some of the open problems.

With DXR making raytracing more accessible, and bringing us one step closer to “real-time raytracing for the masses”, the gap between offline and real-time is getting significantly smaller. That said, tailoring existing offline raytracing approaches to real-time doesn’t happen overnight: it can’t be done 1:1, nor free of compromises, as many of you saw in our GDC/Digital Dragons PICA PICA presentations. And the existing offline approaches are definitely not free of problems themselves, since raytracing literature and algorithms were originally designed with offline rendering in mind.

HPG is a great forum for discussing these sorts of things: lots of folks in research are definitely interested in what DXR can enable for their work, and want to know what problems we are trying to solve and how their research can be adopted by the games industry.

That said, I would appreciate any feedback from fellow developers & researchers about what you think are the most important open problems in real-time raytracing. I already have a few, but I’m definitely interested in hearing your thoughts on the matter.

Feel free to answer here, tweet at me, or reach out privately. Additionally, if you’re around Vancouver for SIGGRAPH, you should consider attending HPG. The schedule is shaping up to be pretty awesome! 🙂

So, what’s your #1 open problem with real-time raytracing?

2 thoughts on “Gathering Feedback: Open Problems in Real-Time Raytracing”

  1. Hi Colin, the number one problem to me seems to be generating or updating acceleration structures for skinned meshes, so they can be raytraced along with all the other stuff in a scene.

    One option would be to raytrace all the static, non-deforming objects and rasterize the skinned meshes separately. That might be more efficient than raytracing skinned meshes, but then you also have to render all the transparent stuff separately. And if you wanted to do things rasterization can’t do well (reflections, global illumination), that would be very difficult: you would have to go back to doing all the clever trickery that has become standard over the last 20 years of rasterization (cube maps/probes and so on).

    So the challenge is to come up with 1) a deformable acceleration structure (is that even possible?) or 2) a really efficient way to generate a good enough BVH.

    Another related problem is that you take a ‘generic’ piece of data (the model in standard T-pose) and, by animating the model, make it unique for each instance of that model. And that unique per-instance data probably needs to stay in memory for a while, for primary rays, shadow rays, reflection rays, path tracing, etc. It’s not fire-and-forget like the vertex shader in rasterization. And there’s not an infinite amount of memory… although it might be enough in practice?
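
    A minimal sketch of what option 2 could look like: instead of rebuilding the BVH every frame, keep its topology and only refit the bounding boxes bottom-up from the post-skinning vertex positions. This is the same idea DXR exposes as an acceleration structure “update” (the ALLOW_UPDATE/PERFORM_UPDATE build flags). The layout below, a binary BVH flattened into an array with one triangle per leaf, is an assumption for illustration, not anyone’s actual implementation:

        #include <algorithm>
        #include <cfloat>
        #include <vector>

        // Axis-aligned bounding box that starts empty and grows to enclose
        // points or other boxes.
        struct AABB {
            float bmin[3] = {  FLT_MAX,  FLT_MAX,  FLT_MAX };
            float bmax[3] = { -FLT_MAX, -FLT_MAX, -FLT_MAX };
            void grow(const float p[3]) {
                for (int i = 0; i < 3; ++i) {
                    bmin[i] = std::min(bmin[i], p[i]);
                    bmax[i] = std::max(bmax[i], p[i]);
                }
            }
            void grow(const AABB& b) { grow(b.bmin); grow(b.bmax); }
        };

        struct Node {
            AABB bounds;
            int  left  = -1;   // child node indices, -1 for leaves
            int  right = -1;
            int  tri   = -1;   // triangle index, >= 0 only for leaves
        };

        // Refit the subtree rooted at `n` against the post-skinning vertex
        // positions (xyz per vertex) and triangle indices (3 per triangle).
        // O(n) over the tree, no sorting or SAH work, so far cheaper than a
        // rebuild; the tradeoff is that under large deformation the boxes
        // get loose and trace performance degrades until you do rebuild.
        void refit(std::vector<Node>& nodes, int n,
                   const std::vector<float>& positions,
                   const std::vector<int>& indices)
        {
            Node& node = nodes[n];
            node.bounds = AABB{};
            if (node.tri >= 0) {         // leaf: bound its (deformed) triangle
                for (int v = 0; v < 3; ++v)
                    node.bounds.grow(&positions[3 * indices[3 * node.tri + v]]);
            } else {                     // inner node: union of the children
                refit(nodes, node.left,  positions, indices);
                refit(nodes, node.right, positions, indices);
                node.bounds.grow(nodes[node.left].bounds);
                node.bounds.grow(nodes[node.right].bounds);
            }
        }

    Refitting answers the speed question but not the quality one, which is why per-frame refits are usually paired with occasional full rebuilds.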

  2. Hello,
    My #1 open problem for real-time raytracing is a better acceleration structure. There are already some pretty good ones out there, but the ones that are fast to trace are also slow to construct (so not good for real-time; a sketch of how the fast builders cut corners follows below). Using some of the rays to help accelerate the others might help (especially when firing anti-aliasing rays, but that is a bad idea anyway).
    P.S. I have never understood why secondary rays are fired randomly when trying to get a noise-free image.
    P.P.S. Nice job on some of the articles. 🙂
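
    On the build-speed point: the fastest builders (LBVH-style, after Karras) get their speed by skipping the surface-area heuristic entirely. They quantize each primitive’s centroid onto a 2^10 grid, interleave the bits into a 30-bit Morton code, radix-sort the primitives by that code, and emit the tree from the sorted order in roughly linear time, trading tree quality for build speed. A sketch of just the Morton-code step, which is the heart of the trick:

        #include <cstdint>

        // Spread the lower 10 bits of x so each bit is followed by two zero
        // bits, making room to interleave the other two axes.
        uint32_t expandBits(uint32_t x) {
            x = (x * 0x00010001u) & 0xFF0000FFu;
            x = (x * 0x00000101u) & 0x0F00F00Fu;
            x = (x * 0x00000011u) & 0xC30C30C3u;
            x = (x * 0x00000005u) & 0x49249249u;
            return x;
        }

        // 30-bit Morton code for a centroid normalized to the scene bounds,
        // i.e. (x, y, z) in [0,1]^3. Sorting primitives by this key places
        // spatially nearby primitives next to each other in memory.
        uint32_t morton3D(float x, float y, float z) {
            auto quantize = [](float v) {
                v = v < 0.0f ? 0.0f : (v > 1.0f ? 1.0f : v);  // clamp
                return (uint32_t)(v * 1023.0f);               // 10 bits/axis
            };
            return (expandBits(quantize(x)) << 2) |
                   (expandBits(quantize(y)) << 1) |
                    expandBits(quantize(z));
        }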
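
    And on the P.S.: secondary rays are fired randomly because the light arriving at a point is an integral over the hemisphere, and averaging random directions gives an unbiased Monte Carlo estimate of that integral. A fixed set of directions would bake the same structured error (banding) into every pixel; random sampling turns that error into noise, which shrinks as 1/sqrt(N) as samples accumulate. A minimal sketch for a Lambertian surface; Vec3 and traceRadiance() are stand-in helpers, not a real API:

        #include <cmath>
        #include <random>

        struct Vec3 {
            float x = 0, y = 0, z = 0;
            Vec3 operator+(Vec3 v) const { return {x + v.x, y + v.y, z + v.z}; }
            Vec3 operator*(float s) const { return {x * s, y * s, z * s}; }
        };
        Vec3 cross(Vec3 a, Vec3 b) {
            return {a.y * b.z - a.z * b.y,
                    a.z * b.x - a.x * b.z,
                    a.x * b.y - a.y * b.x};
        }
        Vec3 normalize(Vec3 v) {
            return v * (1.0f / std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z));
        }

        // Stand-in for the actual tracer: radiance arriving at p from dir.
        Vec3 traceRadiance(Vec3 p, Vec3 dir) { return {1, 1, 1}; }  // stub

        // Diffuse bounce at point p with unit normal n and albedo rho:
        // average N cosine-weighted random directions. For a Lambertian
        // BRDF the cosine term and the 1/pi cancel against the sample pdf
        // (cos(theta)/pi), so each sample contributes just rho * radiance.
        Vec3 diffuseBounce(Vec3 p, Vec3 n, Vec3 rho, int N, std::mt19937& rng) {
            std::uniform_real_distribution<float> u01(0.0f, 1.0f);
            // orthonormal basis (t, b, n) around the normal
            Vec3 t = normalize(cross(n, std::fabs(n.x) > 0.9f
                                          ? Vec3{0, 1, 0} : Vec3{1, 0, 0}));
            Vec3 b = cross(n, t);
            Vec3 sum{};
            for (int i = 0; i < N; ++i) {
                float u1 = u01(rng), u2 = u01(rng);
                float r = std::sqrt(u1), phi = 6.2831853f * u2;
                Vec3 dir = t * (r * std::cos(phi)) + b * (r * std::sin(phi))
                         + n * std::sqrt(1.0f - u1);   // unit length
                Vec3 L = traceRadiance(p, dir);
                sum = sum + Vec3{L.x * rho.x, L.y * rho.y, L.z * rho.z};
            }
            return sum * (1.0f / N);  // noisy but unbiased; converges with N
        }

    In practice renderers use stratified or low-discrepancy samples rather than purely random ones, but the principle is the same: randomization trades banding for noise.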
