Finding Next-Gen – Part I – The Need For Robust (and Fast) Global Illumination in Games

This post is part of the series “Finding Next-Gen”. Original version posted on 2015/11/08. Liveblogging, because opinions evolve over time.

Global Illumination?

Global illumination (GI) is a family of algorithms used in computer graphics that simulate how light interacts with, and transfers between, objects in a scene. With its roots in light transport theory (the mathematics describing how light energy transfers between various media, taking visibility into account), GI accounts both for the light that comes directly from a light source (direct lighting/illumination) and for how this light is reflected by and onto other surfaces (indirect lighting/illumination).

Figure 1: Direct and Indirect Illumination from a single directional light source. [1]

As seen in Figure 1, global illumination greatly increases the visual quality of a scene by providing a rich, organic and physically convincing simulation of light. Rather than solely depending on a manual (human) process to achieve the desired look, the mathematics behind GI allow lighting artists to create visually convincing scenes without having to worry about how they can manually replicate the complexity behind effects such as light scattering, color bleeding, or other visuals that are difficult to represent artistically using only direct illumination.

A Family of Algorithms? But I thought GI was “Bounce” or “Radiosity”

What one might first notice in Figure 2 is the single directional light source reflecting off the yellow wall, which then reflects onto the adjacent floor and walls as well as onto Shrek’s body. This phenomenon is what video game artists would describe as bounce or radiosity [2]. Of course, there’s a myriad of things happening in the image below, such as ambient occlusion, caustics and subsurface scattering, but for simplicity let’s focus on a specific (and significantly noticeable) visual element that’s part of global illumination: diffuse inter-reflection.

Figure 2: Diffuse Inter-Reflection, Light Bleeding, Ambient Occlusion and other GI effects [1]

In the context of video games, this element of indirect lighting is what lighting artists often wish for in order to create mood and visually pleasing, complex imagery. While we should not forget that GI is more than just diffuse inter-reflection, in video games the term GI is often used to describe this diffuse ambient term. As we get closer to real-time, the use of the term GI will most likely evolve, but for now these terms tend to be used interchangeably in the field.

As seen above, the resulting light transport and color transfers spatially unify the objects in the scene, so that things look like they belong together. This visual feature is not always easy to achieve, given the various memory and performance limitations of a target platform.

But This Has Been Achieved Before…

Yup! And again depending on the limitations (static vs dynamic, memory, performance, fidelity) and what you can get away with in your game, indirect illumination has been achieved before in many ways. Here’s a sample of the various approaches:


  • Single ambient color
  • Precomputed, stored in geometry (i.e., per vertex)
  • Precomputed, stored in a Light Map [3] [4]
  • Precomputed, stored in an Environment or Cube Map [5] [6] [7] [8] [9]
  • Precomputed, stored in a Volume [10] [11] [12]



  • Manually placed lights simulating indirect illumination
  • Cube Map Relighting [13]
  • Virtual Point Lights via Reflective Shadow Maps [14] [15]
  • Light Propagation Volumes [16] [17]
  • Image-space approaches [18] [19]
  • Voxel-based Dynamic GI [20]
  • Sparse Voxel Octree Global Illumination (SVOGI) [21]

Alternatively, as suggested by [28], they could be arranged by family:

  1. Spatial data structure tracing
  2. Environment map approximations (see the sketch below)
  3. Offline prebaked
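To give a flavor of the environment-map family, here is a minimal sketch (my own illustration, not code taken from any of the cited papers) of evaluating irradiance from a second-order spherical harmonics environment map in the spirit of [9], assuming the nine RGB coefficients have been prebaked offline:

// Minimal sketch: irradiance from 9 prebaked SH coefficients, in the spirit of [9].
// Assumed coefficient order: L00, L1-1, L10, L11, L2-2, L2-1, L20, L21, L22.
float3 EvaluateSHIrradiance(float3 n, float3 cSH[9])
{
    // Constants from Ramamoorthi & Hanrahan [9]
    const float c1 = 0.429043f, c2 = 0.511664f, c3 = 0.743125f, c4 = 0.886227f, c5 = 0.247708f;

    return c4 * cSH[0]
         + 2.0f * c2 * (cSH[3] * n.x + cSH[1] * n.y + cSH[2] * n.z)
         + 2.0f * c1 * (cSH[4] * n.x * n.y + cSH[5] * n.y * n.z + cSH[7] * n.x * n.z)
         + c1 * cSH[8] * (n.x * n.x - n.y * n.y)
         + c3 * cSH[6] * n.z * n.z
         - c5 * cSH[6];
}

The same kind of compact per-probe data is also what volume-based approaches [10][12] typically store throughout space.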

What’s Next?

Without a doubt, indirect illumination (and GI) improves the quality of a rendered scene, allowing artists to create visually coherent and rich images. GI provides a better understanding of 3D shapes and of the distances between objects. There’s a part of this process that simply can’t be mimicked by a human manually placing lights. GI also helps artists with their craft, permitting them to focus on other things that really matter and simplifying the process. In the end, things just look better.

Figure 3: A scene from Assassin’s Creed Unity, showcasing visually convincing lighting from sky and a directional light

Indirect illumination remains a complex problem, or rather, it is not a problem that has been completely solved in real time. We wouldn’t be talking about it if it were! :)

Thus far, indirect illumination has generally been too expensive (or “unjustified” relative to other rendering systems that require resources and processing power) to compute within the real-time constraints of a video game. As we move towards the third year of titles released on the 8th generation [22] of video game hardware, the need for Cinematic Image Quality and visually convincing Illumination [23] is a hot topic of discussion among the various experts in the field.

We are getting there. Some games have managed to use the previously mentioned techniques to create great visuals because, in the end, they found a technique that was good enough for their use case. Still, our work is not done, and there are a few things we should take into account when thinking about how this problem could be solved. To begin, we can put our users first. Video game artists generally prefer to have:

  • Robustness over correctness
  • Simple over complex (interfaces)
  • Fast iteration over extensive computations

Sound familiar? It should. Researchers were given these exact parameters when they asked how to make their techniques relevant and applicable to the game industry [24].

Global illumination is inherently computationally expensive: it requires computing visibility between arbitrary points in a 3D scene, which is difficult with rasterization, and it requires integrating lighting over a large number of directions. Both requirements are captured by The Rendering Equation [32]:

Figure 4: The Rendering Equation [32]
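For reference, here is the equation from Figure 4 transcribed in one common notation (symbol names may differ slightly from the figure):

L_o(x, \omega_o) = L_e(x, \omega_o) + \int_{\Omega} f_r(x, \omega_i, \omega_o)\, L_i(x, \omega_i)\, (\omega_i \cdot n)\, d\omega_i

where L_o is the outgoing radiance at point x in direction \omega_o, L_e the emitted radiance, f_r the BRDF, L_i the incoming radiance from direction \omega_i, n the surface normal, and \Omega the hemisphere around n. The integral over \Omega is the “large number of directions” mentioned above, and evaluating L_i requires exactly the visibility queries that rasterization struggles with.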

As explained by Julian Fong, Figure 5 shows the rendering equation using only the 1000 most commonly used English words. :)

Figure 5: The Rendering Equation, using only the 1000 most used English words [33].
Additionally, in red: why GI is not simple.

With today’s massive content and ever-growing worlds, the elements outlined in Figure 5 explain why GI is not simple. Further, depending on the kind of game one is making, the limitations of the previously mentioned techniques can be seen as a positive or a negative. Knowing these limitations upfront, and where a technique breaks, is essential, and is something that every paper should fully expose [24].

For example, in the case of an outdoor open-world game, the global illumination contribution might come mostly from the sun and sky, since these are often much brighter than punctual light sources. However, this can change depending on whether the game takes place during the day or at night. The sun and sky provide a significant source of lighting for a day-time game, but in a night-time game the moon is a much dimmer, retro-reflective source of light, making its contribution less noticeable; punctual lights and other potential sources of lighting then become more important. Another example would be a game that can get away with a few parallax-corrected cubemaps [25], used both for diffuse inter-reflection and for glossy reflections, dynamically updated [30] or not, versus another game where this is not sufficient. Or, the lack of occlusion in Light Propagation Volumes might be fine for one game, but not for another. Finally, you need a metric ton of VPLs to make the techniques that revolve around them viable; there are cases where this simply will not scale in a game.
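To illustrate the cubemap case, the core of the parallax correction in [25] amounts to intersecting the lookup direction with the proxy box of the probe; here is a minimal sketch, with made-up parameter names:

// Minimal sketch of a box-projected (“parallax-corrected”) cubemap lookup, in the spirit of [25].
// boxMin/boxMax bound the proxy volume, probePos is the cubemap capture position; all in world space.
float3 ParallaxCorrectedDir(float3 worldPos, float3 dir, float3 probePos, float3 boxMin, float3 boxMax)
{
    // Distances along 'dir' to each pair of box planes
    float3 intersectMax = (boxMax - worldPos) / dir;
    float3 intersectMin = (boxMin - worldPos) / dir;

    // Keep the forward-facing intersections and take the closest one
    float3 furthestPlane = max(intersectMax, intersectMin);
    float  dist = min(min(furthestPlane.x, furthestPlane.y), furthestPlane.z);

    // Re-aim the lookup from the capture point towards the intersection with the proxy
    float3 hitPos = worldPos + dir * dist;
    return hitPos - probePos; // use this direction to sample the cubemap
}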

Each technique’s advantages and inconveniences have to be taken into account.

What doesn’t change, though, is that we should always favor approaches that are robust, simple, and fast over ones that are correct but complicated and computationally extensive. Video game artists need techniques that provide reliable, predictable results, even if they are not perfect. Getting around technological limitations is what they do on a daily basis, and they have become experts at it. So if a technique is not perfect, but it’s robust, simple to use, and fast to author, you will (generally) make people happy. Also, as I’ve mentioned before [26], by limiting the complexity of the interface one can often get pretty interesting, if not better, visual results. People get creative around limitations.

And really, bounce lighting doesn’t need to be accurate, it needs to be robust. It causes the scene to coalesce and fit together; without it, things tend to look extremely artificial. [29]

Robustness vs Correctness?

Robustness of GI is relative to your particular scenario [29], but generally speaking:

  • It doesn’t matter if multiple-scattering is not fully supported
  • But it does matter that propagation doesn’t leak through walls
  • Dynamic GI can’t look like it lags behind by a few seconds, nor can it “spike” as it travels through a grid
  • Consistency over distance is also key, especially for large worlds

Simple vs Complicated?

  • Friendly ranges (0-1) are a must, but you can also let values go out of range while exploring, and readjust later
  • A better unification of parameters between offline rendering (cinema) and (game) static/pseudo-static/dynamic GI is needed. The gap between offline and real-time is getting smaller, though there are still huge differences in the interfaces. This would help with knowledge sharing across industries [27]

Fast Iteration vs Extensive Computation?

  • People have been waiting for lightmaps to compute since forever, and, quite honestly, they are tired of waiting for them, even when the bake is distributed
  • Okay results fast, progressive improvement over time
  • Selective updates


In the end, the objective of this post is to convince you that robust global illumination is essential in helping your artists create rich, convincing, and beautiful visuals. You may be able to use this post to convince your artists that there are benefits to some automation, that relinquishing a bit of control can lead to great things, and that these techniques are just another bag of tools to help them create awesome visuals. My hope is that you will include your users fully and truly in the process, because at the end of the day it’s the only way to create techniques that are tailored to them. People might not agree with you at first, and you might have to insist that it’s for the better, but I’m sure you can find great examples out there to convince them.
By evaluating existing techniques and cross-referencing them with your game’s needs, it is possible both to find techniques that work for you and to push them as far as possible (in collaboration with your best artists, of course). Hopefully you will not be satisfied with what’s out there, and this will lead you to adapt and improve an existing technique for your game’s needs, or to create something new that you will share at a future conference! :) Whether fast means quick offline results or real-time, just remember that it’s all about what your game project would benefit from, and that robustness and simple interfaces are key.
That being said, we’re currently developing our own approach at WBGM, not perfect for every game (of course) but definitely tailored to our needs, which is also what your approach should be. We’re super excited about it, and hopefully we will be able to present it at a future conference. Our approach is being developed in tight collaboration with our lighting and technical artists, without whom the development wouldn’t have been possible. Stay tuned!

So, what does GI mean to you? Are you satisfied? What can we change? What can we do better? Looking forward to hearing your comments.

Thanks for reading! :)

Other Great Examples of GI

Figure 6: Geomerics Enlighten in Unity (Unity Engine 5.2) [31] (Top Left/Right),
Fortnite (Bottom Left, Epic Games, Unreal Engine), Mirror’s Edge (Bottom Right, DICE, Frostbite Engine)

Special Thanks

Jon Greenberg, Victor Ceitelis, Nicolas Lopez for the feedback and suggestions.

Sandra Jensen and Steve Hill for proofreading.


2015-11-08: Post!


[1] Tabellion, E. “Ray Tracing vs. Point-Based GI for Animated Films”. SIGGRAPH 2010 Course: “Global Illumination Across Industries”. Available Online.

[2] Cohen M., “Radiosity and Realistic Image Synthesis” Academic Press Professional, Cambridge, 1993.

[3] Abrash, M. “Quake’s Lighting Model: Surface Caching”, FlipCode Archives, 2000, Available Online.

[4] McTaggart, G. “Half-Life 2 Source Shading / Radiosity Normal Mapping”, 2004, Available Online.

[5] Greene, N. “Environment mapping and other applications of world projections”, IEEE Computer Graphics and Applications 6, 11, 21-29, 1986.

[6] Debevec, P., “Rendering Synthetic Objects into Real Scenes: Bridging Traditional and Image-Based Graphics with Global Illumination and High Dynamic Range Photography”, SIGGRAPH 1998.

[7] Brennan, C., “Diffuse Cube Mapping”, Direct3D ShaderX: Vertex and Pixel Shader Tips and Tricks, Wolfgang Engel, ed., Wordware Publishing, 2002, pp. 287-289.

[8] Debevec, P. et al. “HDRI and Image-based Lighting”, SIGGRAPH 2003. Available Online.

[9] Ramamoorthi, R., and Hanrahan, P., “An Efficient Representation for Irradiance Environment Maps” SIGGRAPH 2001, 497-500.

[10] Greger, G., Shirley, P., Hubbard, P., and Greenberg, D., “The Irradiance Volume”, IEEE Computer Graphics & Applications, 18(2):32-43, 1998.

[11] Sloan, P.-P. et al. “Precomputed Radiance Transfer for Real-Time Rendering in Dynamic, Low-Frequency Lighting Environments”, SIGGRAPH 2002.

[12] Tatarchuk, N. “Irradiance Volumes for Games”. GDC 2005. Available Online.

[13] McAuley, S., “Rendering The World of Far Cry 4”, GDC 2015. Available Online.

[14] Dachsbacher, C., and Stamminger, M. “Reflective Shadow Maps”. In Proceedings of the ACM SIGGRAPH 2005 Symposium on Interactive 3D Graphics and Games, 203–213.

[15] Stewart, J. and Thomas, G., “Tiled Rendering Showdown: Forward ++ vs Deferred Rendering”, GDC 2013. Available Online.

[16] Kaplanyan, A. “Light Propagation Volumes in CryEngine 3”, SIGGRAPH 2009, Available Online.

[17] Kaplanyan, A. and Dachsbacher, C., “Cascaded Light Propagation Volumes for Real-Time Indirect Illumination”, I3D 2010, Available Online.

[18] Ritschel, T. et al., “SSDO: Approximating Dynamic Global Illumination in Image Space”, SIGGRAPH 2009, Available Online.

[19] Mara, M. et al., “Fast Global Illumination Approximations on Deep G-Buffers”, 2014, Available Online.

[20] Doghramachi, H., “Rasterized Voxel-based Dynamic Global Illumination”, 2012, Available Online.

[21] Crassin, C. et al., “Interactive Indirect Illumination Using Voxel Cone Tracing”, Pacific Graphics 2011, Available Online.

[22] “History of video game consoles (eighth generation)”, Wikipedia.

[23] Andersson, J., “5 Major Challenges In Real-Time Rendering”, SIGGRAPH 2012, Available Online.

[24] Hecker, C., “A Game Developer’s Wish List for Researchers”, GDC 2011, Available Online.

[25] Lagarde, S., “Local Image-based Lighting With Parallax-corrected Cubemap”, SIGGRAPH 2012.

[26] Hill, S. et al., “What Keeps You Up At Night”, ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games 2013.

[27] Offline discussions with Victor Ceitelis @ WB Games Montreal

[28] Offline discussions with Nicolas Lopez @ WB Games Montreal

[29] Offline discussions with Jon Greenberg @ WB Chicago / Netherrealm

[30] Colbert, M. “GPU-based Importance Sampling”, GPU Gems 3, Available Online.

[31] Mortensen, J., “Awesome Realtime GI on Desktops and Consoles”, Unity Technologies featuring Geomerics Enlighten. Available Online.

[32] Kajiya, J., “The Rendering Equation”, Computer Graphics (Proceedings of SIGGRAPH ’86), Volume 20, Number 4, 1986, Available Online.

[33] Fong, J., “The rendering equation, using only the 1000 most used English words”, On Twitter, 2015, Available Online.

Finding Next-Gen: Index

It’s Been Awhile…

It’s been a while since I posted something here. My excuse is life, and being busy building a team and new graphics technology for a new game at WB Games Montréal. But you’re right, it’s no excuse, and I need to spend more time blogging. Hopefully this will get me to share more, which I definitely miss doing. So here’s my attempt. ;-)

This is the index page for a series of blog posts I’m currently writing about some challenges in real-time rendering and my perspective on these topics.

In no way is this an attempt to sum it all up or provide perfect solutions, but rather to add to the discussion on topics that are close to and resonate with me, while respecting my various NDAs. What you will find here is undeniably inspired and fueled by the various presentations and discussions from the latest conferences, as well as by the various places where graphics programmers tend to hang out. The following wouldn’t be possible without this amazing community of developers who share on a daily basis – thanks to everyone for the inspiration and for always sharing your discoveries and opinions! Much needed for progress.

Also, this page will most likely evolve and change. Some topics might appear or be grouped, and some might greatly change depending on how much content I can put together. Feel free to come back and check this page over time.

Please leave comments if need be, and thanks for reading!

Finding Next-Gen

Deformable Snow and DirectX 11 in Batman: Arkham Origins

Batman: Arkham Origins

It’s been a while, but I finally found some time for a quick post to gather the presentations I gave this year at the Game Developers Conference (GDC) and NVIDIA’s GPU Technology Conference (GTC). These presentations showcase and explain some of the features developed for Batman: Arkham Origins.

You will find below a quick description of the talks, as well as a link to slides and accompanying video. The GTC presentation is an extended version of the GDC presentation, with additional features discussed as well as the integration of NVIDIA GameWorks.

Feel free to leave comments if you have any questions. :)

GDC 2014 – Deformable Snow Rendering in Batman: Arkham Origins

This talk presents a novel technique for the rendering of surfaces covered with fallen deformable snow featured in Batman: Arkham Origins. Scalable from current generation consoles to high-end PCs, as well as next-generation consoles, the technique allows for visually convincing and organically interactive deformable snow surfaces everywhere characters can stand/walk/fight/fall, is extremely fast, has a low memory footprint, and can be used extensively in an open world game. We will explain how this technique is novel in its approach of acquiring arbitrary deformation, as well as present all the details required for implementation. Moreover, we will share the results of our collaboration with NVIDIA, and how it allowed us to bring this technique to the next level on PC using DirectX 11 tessellation. Attendees will learn about a fast and low-memory footprint technique to render surfaces with deformable snow, which adds interaction between players and the world, depicts iconic and organic visuals of deformable snow, and is a good case for supporting tessellation in a DX11 game with minimal editing and art tweaks.

GDC 2014 – Deformable Snow Rendering in Batman: Arkham Origins (slides)
GDC 2014 – Deformable Snow Rendering in Batman: Arkham Origins (video)

GTC 2014 – DirectX 11 Rendering and NVIDIA GameWorks in Batman: Arkham Origins

This talk presents several rendering techniques behind Batman: Arkham Origins (BAO), the third installment in the critically-acclaimed Batman: Arkham series. This talk focuses on several DirectX 11 features developed in collaboration with NVIDIA specifically for the high-end PC enthusiast. Features such as tessellation and how it significantly improves the visuals behind Batman’s iconic cape and brings our deformable snow technique from the consoles to the next level on PC will be presented. Features such as physically-based particles with PhysX, particle fields with Turbulence, improved shadows, temporally stable dynamic ambient occlusion, bokeh depth-of-field and improved anti-aliasing will also be presented. Additionally, other improvements to image quality, visual fidelity and compression will be showcased, such as improved detail normal mapping via Reoriented Normal Mapping and how Chroma Subsampling at various stages of our lighting pipeline was essential in doubling the size of our open world and still fit on a single DVD.

GTC 2014 – DirectX 11 Rendering and NVIDIA GameWorks in Batman: Arkham Origins (slides)

Blending Normal Maps?

– What is the best way to blend two normal maps together?
– Why can’t I just add two normal maps together in Photoshop? I heard that to combine two normals together, you need to add the positive components and subtract the negative components, then renormalize. Looks right to me…
– Why shouldn’t I be using Overlay (or a series of Photoshop blend modes) to blend normal maps together? 

– I want to add detail to surfaces. How does one combine normal maps in real-time so that the detail normal map follows the topology described by the base normal map?

These are valid questions that keep coming back from one game project to the next.

Seems like there are a lot of approaches out there that try to tackle normal map blending. Some do it better than others: these are often mathematically sound and also suitable for real-time use. One can also find many techniques that are purely ad hoc and non-rigorous, yet have unfortunately been accepted by the game development art community as savoir-faire when it comes to normal map blending. :(

If this is something you’ve heard before, something you’ve asked yourself, check out this article, written together with Stephen Hill (@self_shadow) on the topic of blending normal maps. We go through various techniques that are out there, and present a neat alternative (“Reoriented Normal Mapping”). Our mathematically-based approach to normal map blending retains more detail and performs at a similar instruction cost to other existing techniques. We also provide code, and a real-time demo to compare all the techniques. This is by no means a complete analysis – particularly as we focus on detail mapping – so we might return to the subject at a later date and tie up some loose ends. In the meantime, we hope you find the article useful. Please let us know in the comments!

Blending in Detail
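For a taste of what’s in the article, the core of the Reoriented Normal Mapping blend looks roughly like this (see the article for the derivation, the optimized variants, and the complete listings):

sampler2D texBase;   // base tangent-space normal map, stored in [0,1]
sampler2D texDetail; // detail tangent-space normal map, stored in [0,1]

// Reoriented Normal Mapping blend (detail mapping form), roughly as in "Blending in Detail"
float3 BlendRNM(float2 uv)
{
    float3 t = tex2D(texBase,   uv).xyz * float3( 2,  2, 2) + float3(-1, -1,  0);
    float3 u = tex2D(texDetail, uv).xyz * float3(-2, -2, 2) + float3( 1,  1, -1);
    float3 r = t * dot(t, u) - u * t.z;
    return normalize(r);
}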

Approximating Translucency Revisited – With “Simplified” Spherical Gaussian Exponentiation


Lately, someone at work pointed out the approximation of translucency Marc Bouchard and I developed back at EA [1], which ended up in DICE’s Frostbite engine [2] (aka the Battlefield 3 engine). Wanting to know more, we started browsing the slides one by one and revisiting the technique. Looking at the HLSL, an optimization came to mind, which I discuss in this post. In case you missed the technique, here are a few cool screenshots made by Marc, as well as tips & tricks regarding implementing the technique and generating the inverted ambient-occlusion/thickness map. See the references for additional links.


As mentioned in [2], the approximation of translucency is implemented as such:

// fLTDistortion = Translucency Distortion Scale Factor
// fLTScale = Scale Factor
// fLTThickness = Thickness (from a texture, per-vertex, or generated)
// fLTPower = Power Factor
half3 vLTLight = vLight + vNormal * fLTDistortion;
half fLTDot = pow(saturate(dot(vEye, -vLTLight)), fLTPower) * fLTScale;
half3 fLT = fLightAttenuation * (fLTDot + fLTAmbient) * fLTThickness;
half3 cLT = cDiffuseAlbedo * cLightDiffuse * fLT;

In parallel, as mentioned in Christina Coffin’s talk on SPU-based deferred shading for PS3 [3], Matthew Jones (from Criterion Games) provided an optimization for computing a power function (or exponentiation, e.g. for specular lighting) using a Spherical Gaussian approximation. This was also documented by Sébastien Lagarde [4][5]. By default, pow is roughly implemented as such:

// Generalized Power Function
float pow(float x, float n)
{
    return exp(log(x) * n);
}

The Spherical Gaussian approximation replaces the log(x) and the exp(x) by an exp2(x). The specular power (n) is also scaled and biased by 1/ln(2):

// Spherical Gaussian Power Function
float pow(float x, float n)
{
    n = n * 1.4427f + 1.4427f; // 1.4427f --> 1/ln(2)
    return exp2(x * n - n);
}
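For the curious, the 1.4427 constant follows from converting to base 2 and linearizing the logarithm around x = 1 (the regime that matters for specular-like lobes):

x^n = e^{n \ln x} \approx e^{n (x - 1)} = 2^{\, n (x - 1) \log_2 e} = \mathrm{exp2}(1.4427\, n\, x - 1.4427\, n), \qquad \log_2 e = 1/\ln 2 \approx 1.4427

The snippet above additionally adds one extra unit of 1.4427 as a bias on the scaled exponent.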

If possible, you should handle the scale and bias offline, or somewhere else. Additionally, if you have to compute the scale and bias at runtime but don’t really care what actual number is passed as the exponent, a quick hack is to get rid of the scale and the bias altogether. While this is not something you necessarily want to do with physically-based BRDFs – where exponents are tweaked based on surface types – in the case where you/artists are visually tweaking results (i.e. for ad hoc techniques, such as this approximation of translucency), this is totally fine. In our case, artists don’t care if the value is 8 or 12.9843 (8*1.4427+1.4427); they just want a specific visual response, and it saves ALU. Again, this is not to be used for all cases of pow(x, n), but you should try it with other techniques. You’d be surprised how often people won’t see a difference. :)

In the end, after injecting the “Simplified” Spherical Gaussian approximation in our translucency technique, we get:

// fLTDistortion = Translucency Distortion Scale Factor 
// fLTScale = Scale Factor 
// fLTThickness = Thickness (from a texture, per-vertex, or generated) 
// fLTPower = Power Factor
half3 vLTLight = vLight + vNormal * fLTDistortion;
half fLTDot = exp2(saturate(dot(vEye, -vLTLight)) * fLTPower - fLTPower) * fLTScale;
half3 fLT = fLightAttenuation * (fLTDot + fLTAmbient) * fLTThickness;
half3 cLT = cDiffuseAlbedo * cLightDiffuse * fLT;


[1] BARRÉ-BRISEBOIS, Colin and BOUCHARD, Marc. “Real-Time Approximation of Light Transport in Translucent Homogenous Media”, GPU Pro 2, Wolfgang Engel, Ed. Charles River Media, 2011.

[2] BARRÉ-BRISEBOIS, Colin and BOUCHARD, Marc. “Approximating Translucency for a Fast, Cheap and Convincing Subsurface Scattering Look”, GDC 2011, available online.

[3] COFFIN, Christina. “SPU-based Deferred Shading for Battlefield 3 on Playstation 3”, GDC 2011, available online.

[4] LAGARDE, Sébastien. “Adopting a physically based shading model”, Personal Blog, available online.

[5] LAGARDE, Sébastien. “Spherical Gaussian approximation for Blinn-Phong, Phong and Fresnel”, available online.

A Taste of Live Code Editing With Visual Studio’s Tracepoints

Needless to say, compile times vary greatly from one software project to another. When debugging, we often spend a significant amount of time changing a few lines of code, recompiling, waiting and then relaunching the software. While it is true that one has to change their debugging “style” from one project to another (i.e. having a thorough understanding of the problem before “poking around” the code is definitely a plus when dealing with big code bases where edit-and-continue is not possible), waiting for the compiler and linker when debugging is never fun. It’s also very unproductive.

In parallel, some of the tools we use everyday allow us to greatly improve our debugging efficiency, though sometimes we are completely oblivious to the power that is available to us. Regardless of how fast or slow compile and link times are on your current project, any tools that can mitigate the recompile cycle when debugging are welcome. One such tool is Visual Studio’s tracepoint: a breakpoint with a custom action associated to it.

Many think tracepoints are only useful for outputting additional “on-the-fly” debugging information. Of course tracepoints can be used to add printf’s as you’re stepping through code (or quickly spinning inside loops and various functions), but they also have the amazing ability to “execute code”. Case in point: I can’t actually recall when this feature was added, but it’s been news to the people I’ve talked to about it…
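For reference, a hypothetical stand-in for the kind of code being discussed might look like this (SomeObject and ProcessItem are made-up placeholders):

struct SomeObject { float x; };               // made-up type, matching the object.x edit shown later
float ProcessItem(int i) { return (float)i; } // made-up placeholder for the per-iteration work

void Run()
{
    bool       done = false;
    int        i    = 0;
    float      data[512];
    SomeObject object = { 0.0f };

    while (!done)
    {
        data[i] = ProcessItem(i);
        ++i;
        done = (i == 512);   // while debugging, we would rather stop at 100
    }
}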

As we can see in the (very useful :)) code above, two breakpoints and a tracepoint are set. To convert a breakpoint to a tracepoint, right-click then select the “When hit…” option:

The following window will open:

Microsoft has put up several pages explaining how one can use tracepoints to format and output information (using the special keywords mentioned above, and others as well). While it is true that you can use tracepoints to fetch the contents of variables (and output them in the Output window), it is not clearly mentioned that tracepoints also allow you to programmatically modify variables, in real time.

If we go back to the previous code example, let’s say that while debugging we realize the while-loop should stop after 100 iterations (rather than 512). Instead of editing the code and recompiling, we can simply set up a tracepoint that will update the done variable.

By setting { done = (i == 100); } along with “Continue execution”, the done variable gets updated with the result of the condition every time the loop iterates. The loop then stops once the condition is fulfilled.

You can also concatenate several instructions on the same line; each simply has to be enclosed in curly braces:

i.e. { {done = (i == 100);} { object.x -= 1.0f; } { data[15] = 3; } }

While this is a very simple example, this also works well with arrays of simple types, as well as structures. Although the scope of what you can do is limited — for instance, you can’t go wild and call functions — the ability to quickly update variables and behaviour on the fly definitely adds to the debugging process. See how this can be used with your codebase and try using this feature the next time you debug code. I can guarantee you will enjoy the ability to temporarily change calculations and behavior without having to recompile!

Many thanks to Stephen Hill (Ubisoft) for reviewing this post.

Approximating Translucency – Part II (addendum to GDC 2011 talk / GPU Pro 2 article)


Thanks to everyone who attended my GDC talk! Was quite happy to see all those faces I hadn’t seen in a while, as well as meet those whom I only had contact with via Twitter, IM or e-mail.

For those who contacted me post-GDC, it seems the content I submitted for GPU Pro 2 didn’t make it into the final samples archive. I must’ve submitted it too late, or it didn’t make it to the editor. Either way, the code in the paper is the most up-to-date, so you should definitely check it out (and/or simply buy the book)!

Roger Cordes sent the following questions. I want to share the answers, since they cover most of the questions people had after the talk:

1. An “inward-facing” occlusion value is used as the local thickness of the object, a key component of this technique. You mention that you render a normal-inverted and color-inverted ambient occlusion texture to obtain this value. Can you give any further detail on how you render an ambient occlusion value with an inverted normal? I am attempting to use an off-the-shelf tool (xNormal) to render ambient occlusion from geometry that has had its normals inverted, and I don’t believe my results are consistent with the Hebe example you include in your article. A related question: is there a difference between rendering the “inward-facing” ambient occlusion versus rendering the normal (outward-facing) ambient occlusion and then inverting the result?

Yes, it is quite different actually. :)

If we take the cube (from the talk) as an example, when rendering the ambient occlusion with the original normals, you basically end up with white for all sides, since there’s nothing really occluding each face (or, approximately, the hemisphere of each face, oriented by the normal of that face). Now, in the case you flip the surface normals, you will end up with a different value at each vertex, because the inner faces that meet at each vertex create occlusion between each other. Basically, by flipping the normals during the AO computation, you are averaging “occlusion” inside the object. Now, the cube is not the best example for this, because it’s pretty uniform, and the demo with the cubes at GDC didn’t really have a map on each object.

In the case of Hebe, it’s quite different:

You can see in the image above how the nose is “whiter” than the other, “thicker” parts of the head. And this is the key behind the technique: this map represents the local variation of thickness on an object. If we transpose the previous statement to the head example, this basically means that the polygons inside the head generate occlusion between each other: the closer they are to each other, the more occlusion we get. If we get a lot of occlusion, we know the faces are close to each other, which means that this area of the object is generally “thin”.

Inverting the AO computation doesn’t give the same result, because we’re not comparing the same thing, since the surface is “flipped” and faces are now oriented towards the inside of the mesh.

If we compare this with regular AO, it’s really not the same thing, and we can clearly see why:

Even color-inverted, you can see that the results are not the same. The occlusion on the original mesh happens on its outside hull, whereas the normal-inverted version gathers occlusion from the inside hull – totally different results (and, in a way, totally different shapes).
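To make the idea a bit more concrete, here is a heavily simplified, conceptual sketch of what the “inward” occlusion bake computes; SampleHemisphereCosine and TraceDistance are made-up placeholders standing in for whatever your baking tool actually does:

// Conceptual sketch only: average occlusion gathered along the *flipped* normal.
// Thin regions hit nearby geometry in every direction, so they end up with high values
// in the resulting local thickness map (white = thin convention).
float LocalThickness(float3 position, float3 normal, float maxDist, uint numSamples)
{
    float3 inwardNormal = -normal;   // the normal-inversion step described above
    float  occlusion    = 0.0f;

    for (uint i = 0; i < numSamples; ++i)
    {
        // Placeholder: cosine-weighted direction in the hemisphere around inwardNormal
        float3 dir = SampleHemisphereCosine(inwardNormal, i, numSamples);

        // Placeholder: distance to the first surface hit, or maxDist if nothing is hit
        float hitDist = TraceDistance(position, dir, maxDist);

        occlusion += saturate(1.0f - hitDist / maxDist);
    }

    return occlusion / numSamples;   // high value = lots of nearby "inside" geometry = thin
}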

2. In the GDC talk, some examples were shown of the results from using a full RGB color in the local thickness map (e.g. for human skin). The full-color local thickness map itself was not shown, though. I’m curious how these assets are authored, and wonder if you have any examples you might share?

For this demo, we basically just generated the inverted AO map, and multiplied it by a nice uniform and saturated red (the same kind of red you can see when you put your finger in front of a light). It can be more complex, and artists can also paint the various colors they want in order to get better results (i.e. in this case, some shades of orange and red, rather than a uniform color). Nonetheless, we felt it was good enough for this example. With proper tweaking, I’m sure you can make it even look better! :)

To illustrate how we generated the translucency/thickness maps for the objects in the talk, here’s another example with everyone’s favorite teapot:

We used 3D Studio MAX. To generate inverted normals, we use the Normal modifier and select Flip normals:

Make sure your mesh is properly UV-unwrapped. We then use Render to Texture with Mental Ray to generate the ambient occlusion map for the normal-inverted object:

Notice how the Bright and Dark are inverted (compared to the default values). Here are some general rules/tips for generating a proper translucency/thickness map:

  • Make sure to select Ambient Occlusion when adding the object as output, and the proper UV channel
  • Set the Dark color to the maximum value you want for translucency (white = max, black = min)
  • Set the Bright color to the minimum value you want for translucency (i.e. we use 50% gray, because we want minimal translucency everywhere on the object)
  • Play with the values in the yellow rectangle to get the results you want
  • Max Dist is the distance traveled by the rays. If you have a big object, make it bigger; the opposite for a small object.
  • Increase the number of samples and the resolution once you roughly have what you want, for improved visuals (i.e. less noise)

That is all for now. Hope this helps! :)

