November 6, 2015
This post is part of the series “Finding Next-Gen”. Original version on 2015/11/08. Liveblogging, because opinions evolve over time.
Global illumination (GI) is a family of algorithms used in computer graphics to simulate how light interacts with and transfers between objects in a scene. With its roots in light transport theory (the mathematics behind energy, how it transfers between various media, and how visibility comes into play), GI takes into account both the light that comes directly from a light source (direct lighting/illumination) and how this light is reflected by and onto other surfaces (indirect lighting/illumination).
Figure 1: Direct and Indirect Illumination from a single directional light source. 
As seen in Figure 1, global illumination greatly increases the visual quality of a scene by providing a rich, organic and physically convincing simulation of light. Rather than solely depending on a manual (human) process to achieve the desired look, the mathematics behind GI allow lighting artists to create visually convincing scenes without having to worry about how they can manually replicate the complexity behind effects such as light scattering, color bleeding, or other visuals that are difficult to represent artistically using only direct illumination.
A Family of Algorithms? But I thought GI was “Bounce” or “Radiosity”
The first thing one might notice in Figure 2 is the single directional light source reflecting off the yellow wall, which then reflects onto the adjacent floor and walls as well as onto Shrek‘s body. This phenomenon is what video game artists would describe as bounce or radiosity. Of course, there’s a myriad of things happening in the image, such as ambient occlusion, caustics, and subsurface scattering, but for simplicity let’s focus on a specific (and significantly noticeable) visual element that’s part of global illumination: diffuse inter-reflection.
Figure 2: Diffuse Inter-Reflection, Light Bleeding, Ambient Occlusion and other GI effects 
In the context of video games, this element of indirect lighting is what lighting artists often wish for in order to create mood and visually pleasing, complex imagery. While we should not forget that GI is more than just diffuse inter-reflection, in video games the term GI is often used to describe this diffuse ambient term. As we get closer to real-time, the use of the term will most likely evolve, but for now these terms might be used interchangeably in the field.
As seen above, the resulting light transport and color transfers spatially unify the objects in the scene, so that things look like they belong together. This visual feature is not always easy to achieve, given the various memory and performance limitations of a target platform.
But This Has Been Achieved Before…
Yup! And again depending on the limitations (static vs dynamic, memory, performance, fidelity) and what you can get away with in your game, indirect illumination has been achieved before in many ways. Here’s a sample of the various approaches:
- Single ambient color
- Precomputed, stored in geometry (i.e., per-vertex)
- Precomputed, stored in a Light Map  
- Precomputed, stored in an Environment or Cube Map     
- Precomputed, stored in a Volume   
- Geomerics Enlighten 
- Manually placed lights simulating indirect illumination
- Cube Map Relighting 
- Virtual Point Lights via Reflective Shadow Maps  
- Light Propagation Volumes  
- Image-space approaches  
- Voxel-based Dynamic GI 
- Sparse Voxel Octree Global Illumination (SVOGI) 
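To make the gap between the simplest and the more elaborate entries above concrete, here is a minimal sketch (hypothetical helper names, not from any of the techniques’ actual code) of the very first approach, a single constant ambient color added on top of direct lighting:

```python
def shade(albedo, n_dot_l, light_color, ambient_color):
    """Direct Lambertian term plus the simplest possible indirect
    approximation: a single constant ambient color.

    albedo, light_color, ambient_color are RGB triples; n_dot_l is the
    cosine between the surface normal and the light direction."""
    direct = [albedo[i] * light_color[i] * max(0.0, n_dot_l) for i in range(3)]
    indirect = [albedo[i] * ambient_color[i] for i in range(3)]
    return [direct[i] + indirect[i] for i in range(3)]

# Surface facing away from the light: only the ambient term survives.
print(shade([1.0, 1.0, 1.0], -1.0, [1.0, 1.0, 1.0], [0.2, 0.2, 0.2]))
```

Every subsequent entry in the list is, in one way or another, a smarter replacement for that constant `indirect` term: per-vertex or lightmap bakes make it vary spatially, environment maps and volumes make it vary directionally, and the dynamic techniques make it react to changes in the scene.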
Alternatively, as has been suggested, they could be arranged by family:
- Spatial data structure tracing
- Environment map approximations
- Offline prebaked
Without a doubt, indirect illumination (and GI in general) improves the quality of a rendered scene, allowing artists to create visually coherent and rich images. GI provides a better understanding of 3D shapes and of the distances between objects. There’s a part of this process that simply can’t be mimicked by a human manually placing lights. GI also helps artists with their craft, simplifying the process and permitting them to focus on the things that really matter. In the end, things just look better.
Figure 3: A scene from Assassin’s Creed Unity, showcasing visually convincing lighting from sky and a directional light
Indirect illumination remains a complex problem, or rather, it is not a problem that has been completely solved in real time. We wouldn’t be talking about it if it were! :)
Thus far, indirect illumination has generally been too expensive (or “unjustified” compared with other rendering systems competing for resources and processing power) to compute within the real-time constraints of a video game. As we move towards the third year of titles released on the 8th generation of video game hardware, the need for Cinematic Image Quality and visually convincing Illumination is a hot topic of discussion among the various experts in the field.
We are getting there. Some games have managed to use the previously mentioned techniques to create great visuals because, in the end, they found a technique that was good enough for their use case. Still, our work is not yet done, and there are a few things we should take into account when thinking about how this problem could be solved. To begin, we can put our users first. Video game artists generally prefer to have:
- Robustness over correctness
- Simple over complex (interfaces)
- Fast iteration over extensive computations
Sound familiar? It should. Researchers were given these exact parameters when they asked how to make their techniques relevant and applicable to the game industry.
Global illumination is inherently computationally expensive because it requires computing visibility between arbitrary points in a 3D scene, which is difficult with rasterization. Moreover, GI requires integrating lighting over a large number of directions. This is part of The Rendering Equation :
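The original post shows the equation as an image; for reference, the standard reflected-light form of Kajiya’s rendering equation over the hemisphere $\Omega$ is:

```latex
L_o(x, \omega_o) = L_e(x, \omega_o)
  + \int_{\Omega} f_r(x, \omega_i, \omega_o)\, L_i(x, \omega_i)\, (\omega_i \cdot n)\, \mathrm{d}\omega_i
```

where $L_o$ is the outgoing radiance at point $x$ in direction $\omega_o$, $L_e$ is the emitted radiance, $f_r$ is the BRDF, $L_i$ is the incoming radiance from direction $\omega_i$, and $(\omega_i \cdot n)$ is the cosine term with the surface normal $n$.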
As explained by Julian Fong, this is the rendering equation using 1000 commonly used English words. :)
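To make the cost of “integrating lighting over a large number of directions” concrete, here is a minimal Monte Carlo sketch (illustrative only, not from the original post): it estimates the reflection integral for a Lambertian surface under a constant-radiance sky, where the analytic answer is simply albedo × sky radiance.

```python
import math
import random

def sample_hemisphere():
    """Uniformly sample a direction on the unit hemisphere around n = (0, 0, 1)."""
    z = random.random()                      # cos(theta) is uniform in [0, 1)
    phi = 2.0 * math.pi * random.random()
    r = math.sqrt(max(0.0, 1.0 - z * z))
    return (r * math.cos(phi), r * math.sin(phi), z)

def estimate_outgoing_radiance(albedo, sky_radiance, n_samples=100_000):
    """Monte Carlo estimate of the hemisphere integral: sum over random
    directions of BRDF * incoming radiance * cos(theta), divided by the pdf."""
    brdf = albedo / math.pi                  # Lambertian BRDF
    pdf = 1.0 / (2.0 * math.pi)              # uniform hemisphere pdf
    total = 0.0
    for _ in range(n_samples):
        wi = sample_hemisphere()
        cos_theta = wi[2]                    # dot(wi, n) with n = (0, 0, 1)
        total += brdf * sky_radiance * cos_theta / pdf
    return total / n_samples

random.seed(42)
print(estimate_outgoing_radiance(0.5, 1.0))  # converges toward 0.5
```

Even this trivial scene needs tens of thousands of samples per point for a stable answer, and a real scene must also resolve visibility along each sampled direction; production path tracers lean on cosine-weighted or BRDF importance sampling to cut the variance.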
With today’s massive content and ever-growing worlds, the outlined elements in Figure 5 explain why GI is not simple. Further, depending on the kind of game one is making, the limitations of the previously mentioned techniques can be seen as either a positive or a negative. Knowing these limitations, and where a technique breaks, upfront is hugely necessary, and is something every paper should fully expose.
For example, in the case of an outdoor open-world game, most of the global illumination contribution might come from the sky, because the sun is often much brighter than punctual light sources. However, this can change depending on whether the game takes place during the day or at night. The sun and sky provide a significant source of lighting for a daytime game, but in a nighttime game the moon only retro-reflects sunlight, making its contribution far less noticeable; punctual lights and other potential sources of lighting then become more important. Another example would be a game that can get away with a few parallax-corrected cubemaps, used both for diffuse inter-reflection and glossy reflections, dynamically updated or not, versus another where this might not be sufficient. Or, the lack of occlusion in Light Propagation Volumes might be fine for one game, but not for another. Finally, you need a metric ton of VPLs to make techniques that revolve around that technology viable, so there are cases where this simply will not scale in a game.
Each technique’s advantages and inconveniences have to be taken into account.
What doesn’t change, though, is that we should always favor approaches that are robust, simple, and fast over ones that are correct, complicated, and computationally extensive. Video game artists need techniques that provide reliable, expected results, even if they are not perfect. Getting around technological limitations is what they do on a daily basis, and they have become experts at it. So if a technique is not perfect, but it’s robust, simple to use, and fast to author, you will (generally) make people happy. Also, as I’ve mentioned before, by limiting the complexity of the interface one can often get interesting, if not better, visual results. People get creative around limitations.
And really, bounce lighting doesn’t need to be accurate, it needs to be robust. It causes the scene to coalesce and fit, so without it things tend to look extremely artificial. 
Robustness vs Correctness?
Robustness of GI is relative to your particular scenario, but generally speaking:
- It doesn’t matter if multiple-scattering is not fully supported
- But it does matter that propagation doesn’t leak through walls
- Dynamic GI can’t look like it lags behind by a few seconds, nor can it “spike” as it travels through a grid
- Consistency over distance is also key, especially for large worlds
Simple vs Complicated?
- Friendly ranges (0-1) are a must, but you can also let them go over the range as you’re exploring and later readjust
- A better unification of parameters between offline rendering (cinema) and (game) static/pseudo-static/dynamic GI. The distance between offline and real-time is getting smaller, though there are still huge differences in the interface. This would help with knowledge sharing across industries 
Fast Iteration vs Extensive Computation?
- People have been waiting for lightmaps to compute since forever and, quite honestly, they are tired of waiting for them, even when the bake is distributed
- Okay results fast, progressive improvement over time
- Selective updates
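The “okay results fast, progressive improvement over time” idea can be sketched as a simple running average (illustrative only, not any engine’s actual baker): each new batch of samples refines the result already on screen, rather than blocking the artist until a full bake completes.

```python
import random

class ProgressiveEstimate:
    """Incremental mean: the estimate is usable after the first few samples
    and gets progressively more accurate as more samples arrive."""
    def __init__(self):
        self.mean = 0.0
        self.count = 0

    def add_sample(self, value):
        self.count += 1
        self.mean += (value - self.mean) / self.count  # incremental mean update
        return self.mean

random.seed(1)
est = ProgressiveEstimate()
for _ in range(10_000):
    est.add_sample(random.random())  # stand-in for one lighting sample
print(f"estimate after {est.count} samples: {est.mean:.3f}")
```

The same accumulator also enables selective updates: only the probes or texels whose inputs changed need their estimates reset and refined, while the rest of the scene keeps its converged result.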
So, what does GI mean to you? Are you satisfied? What can we change? What can we do better? Looking forward to hearing your comments.
Thanks for reading! :)
Other Great Examples of GI
Thanks to Jon Greenberg, Victor Ceitelis, and Nicolas Lopez for the feedback and suggestions.
Sandra Jensen and Steve Hill for proofreading.
 Tabellion, E. “Ray Tracing vs. Point-Based GI for Animated Films”. SIGGRAPH 2010 Course: “Global Illumination Across Industries”. Available Online.
 Cohen, M., “Radiosity and Realistic Image Synthesis”, Academic Press Professional, Cambridge, 1993.
 Abrash, M. “Quake’s Lighting Model: Surface Caching”, FlipCode Archives, 2000, Available Online.
 McTaggart, G. “Half-Life 2 Source Shading / Radiosity Normal Mapping”, 2004, Available Online.
 Greene, N. “Environment mapping and other applications of world projections”, IEEE Comput. Graph. Appl. 6, 11, 21-2. 1986
 Debevec, P., “Rendering Synthetic Objects into Real Scenes: Bridging Traditional and Image-Based Graphics with Global Illumination and High Dynamic Range Photography”, SIGGRAPH 1998.
 Brennan, C., “Diffuse Cube Mapping”, Direct3D ShaderX: Vertex and Pixel Shader Tips and Tricks, Wolfgang Engel, ed., Wordware Publishing, 2002, pp. 287-289.
 Debevec, P. et al., “HDRI and Image-based Lighting”, SIGGRAPH 2003. Available Online.
 Ramamoorthi, R., and Hanrahan, P., “An Efficient Representation for Irradiance Environment Maps” SIGGRAPH 2001, 497-500.
 Greger, G., Shirley, P., Hubbard, P., and Greenberg, D., “The Irradiance Volume”, IEEE Computer Graphics & Applications, 18(2):32-43, 1998.
 Sloan, P.-P. et al. “Precomputed Radiance Transfer for Real-Time Rendering in Dynamic, Low-Frequency Lighting Environments”, SIGGRAPH 2002.
 Tatarchuk, N. “Irradiance Volumes for Games”. GDC 2005. Available Online.
 McAuley, S., “Rendering the World of Far Cry 4”, GDC 2015. Available Online.
 Dachsbacher, C., and Stamminger, M. “Reflective Shadow Maps”. In Proceedings of the ACM SIGGRAPH 2005 Symposium on Interactive 3D Graphics and Games, 203–213.
 Stewart, J. and Thomas, G., “Tiled Rendering Showdown: Forward ++ vs Deferred Rendering”, GDC 2013. Available Online.
 Kaplanyan, A. “Light Propagation Volumes in CryEngine 3”, SIGGRAPH 2009, Available Online.
 Kaplanyan, A. and Dachsbacher, C., “Cascaded Light Propagation Volumes for Real-Time Indirect Illumination”, SIGGRAPH 2010, Available Online.
 Ritschel, T. et al., “SSDO: Approximating Dynamic Global Illumination in Image Space”, SIGGRAPH 2009, Available Online.
 Mara, M. et al., “Fast Global Illumination Approximations on Deep G-Buffers”, 2014, Available Online.
 Doghramachi, H., “Rasterized Voxel-based Dynamic Global Illumination”, 2012, Available Online.
 Crassin, C. et al., “Interactive Indirect Illumination Using Voxel Cone Tracing”, Pacific Graphics 2011, Available Online.
 “History of video game consoles (eighth generation)”, Wikipedia.
 Andersson, J., “5 Major Challenges In Real-Time Rendering”, SIGGRAPH 2012, Available Online.
 Hecker, C., “A Game Developer’s Wish List for Researchers”, GDC 2011, Available Online.
 Lagarde, S., “Local Image-based Lighting With Parallax-corrected Cubemap”, SIGGRAPH 2012.
 Hill, S. et al., “What Keeps You Up At Night”, ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games 2013, http://www.csee.umbc.edu/csee/research/vangogh/I3D2013/.
 Offline discussions with Victor Ceitelis @ WB Games Montreal
 Offline discussions with Nicolas Lopez @ WB Games Montreal
 Offline discussions with Jon Greenberg @ WB Chicago / Netherrealm
 Colbert, M. “GPU-based Importance Sampling”, GPU Gems 3, Available Online.
 Mortensen, J., “Awesome Realtime GI on Desktops and Consoles”, Unity Technologies featuring Geomerics Enlighten. Available Online.
 Kajiya, J., “The Rendering Equation”, Computer Graphics (SIGGRAPH ’86), Vol. 20, No. 4, 1986, Available Online.