Channeling Your Inner Light

An attempt at more blogging, but this happened in the meantime, which is why you might find some of the tweets below to be from a few months ago. 😉

A topic of discussion that comes up every now and then between programmers, technical artists and lighting artists is the concept of light masking, or Lighting Channels, and whether this concept is still valid. I’ve had this discussion many times with developers out there (and somehow I’m sure you have too), and opinions diverge among artists and programmers alike. To get a fresh sample on the matter, I decided to ask the twitter-verse:

Light Channels – Yay or Nay (Twitter Poll)


Yup, a division! Before we go over the discussion and the many answers people provided, which I will mix/interleave throughout this post to give perspective, let’s first cover some ground and make sure we’re all talking about the same thing.

Light(ing) Channels?

In layman’s terms, Lighting Channels (LC) refer to the ability to mask lights on a per-object basis, or on a subset of objects that meet some masking criteria. Environment-only, character-only and cinematic-only lights are a few examples that come to mind.
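In engine terms, the masking criteria usually boil down to a bitmask stored on both lights and objects, with a light affecting an object only when the two masks overlap. A minimal C++ sketch of the idea (the channel names and structures are illustrative, not from any particular engine):

```cpp
#include <cstdint>

// Each bit is one lighting channel (names are illustrative).
enum LightChannel : uint32_t
{
    LC_Environment = 1u << 0,
    LC_Characters  = 1u << 1,
    LC_Cinematic1  = 1u << 2,
};

struct Light      { uint32_t channelMask; /* color, position, ... */ };
struct Renderable { uint32_t channelMask; /* mesh, transform, ... */ };

// A light affects an object only if they share at least one channel.
inline bool AffectsObject(const Light& light, const Renderable& object)
{
    return (light.channelMask & object.channelMask) != 0;
}
```

A cinematic character light would then carry LC_Characters | LC_Cinematic1, while the static environment, tagged LC_Environment only, would never be touched by it.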

Lighting Channels in UDK – Point light affecting Dynamic objects tagged as Cinematic 1 [1]

This inclusion/exclusion concept gives lighting artists more fine-grained, manual control over light interactions. The image above shows a light affecting dynamic objects, such as characters, but not the static environment. This case is especially common in cut-scenes, where lighting artists can clearly identify the key, fill and rim lights for each character, for each shot, to ensure that the art-directed lighting is manicured and behaves as expected.


3-point light setup example from a TV show

Lighting channels are not limited to characters and cut-scenes. Other classic examples come to mind, such as additional lights to manually enhance/fix up global illumination (i.e. faking/adding custom diffuse inter-reflection, or “bounce”), or additional lights for animated/hero objects.

Light Channels vs Light Merging

A bit of a side topic, though often intertwined with light channels, is light merging. In the context of forward lighting as it was done back then, prior to tiled or clustered approaches [2] [3], iterating over all potentially affecting lights on a per-object basis would greatly affect performance, especially on older hardware. To mitigate this, dynamically lit objects were often lit by a subset of the lights present in the scene: a select number of closest lights, or lights dynamically merged/coalesced [1] based on brightness/luminous flux or distance, or even merged using spherical harmonics [4] [5] to extract the n most relevant point lights (often 3). Lights could also be merged by taking their affecting channels into account.
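To illustrate the selection/merging side of this, here is a hedged C++ sketch that keeps the n most relevant lights per object, scored by luminous flux over squared distance (the names and the scoring heuristic are assumptions for illustration; shipped engines used many variations, including the SH-based merging of [4] [5]):

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

struct Vec3 { float x, y, z; };

inline float DistSq(const Vec3& a, const Vec3& b)
{
    const float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return dx * dx + dy * dy + dz * dz;
}

struct SceneLight { Vec3 position; float flux; uint32_t channelMask; };

// Keep the n most relevant lights for one object. Lights on non-matching
// channels are skipped; the rest are sorted by flux over squared distance.
std::vector<SceneLight> SelectLights(const std::vector<SceneLight>& lights,
                                     const Vec3& objectPos,
                                     uint32_t objectChannels,
                                     size_t maxLights = 3)
{
    std::vector<SceneLight> result;
    for (const SceneLight& l : lights)
        if (l.channelMask & objectChannels)
            result.push_back(l);

    const auto score = [&](const SceneLight& l)
    { return l.flux / std::max(DistSq(l.position, objectPos), 1e-4f); };

    std::sort(result.begin(), result.end(),
              [&](const SceneLight& a, const SceneLight& b)
              { return score(a) > score(b); });

    if (result.size() > maxLights)
        result.resize(maxLights);
    return result;
}
```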

What’s the connection with light channels, you say? Beyond lights being merged based on their channels, it turns out that while merging lights can, in certain scenarios, provide a “good approximation” of light interactions with dynamic objects without having to compute all interactions, you still end up with a discrepancy between the lights manually placed by artists and the lights merged for dynamic objects. To compensate, lighting artists would often request additional “forced”/non-merged lights. Unfortunately this frequently led to too many of these lights, and we were back at square one with regard to the performance benefits of lighting with merged lights. Basically, a performance-affecting visual workaround to fix a performance workaround. It’s getting complicated…

A Workaround For Something Broken?

At this point you probably feel like something’s odd, not working, or simply that light channels are a hack for something broken in the way we light scenes. And you’d be right to think so. To put things in perspective: though we programmers work really hard to constantly improve their representation and behavior, real-time lights in video games don’t generally behave the way they’re intuitively expected to. Of the many discrepancies, the following stand out:

  1. Shadows are commonly missing from many lights
    1. Only a select few (key) lights get shadows, not all of them.
    2. Shadowless lights shine through walls and can hit unintended targets.
  2. Lights don’t (all) trigger inter-reflections / indirect illumination
    1. The lack of proper GI on all dynamic lights means direct illumination only.
    2. Even if your engine has real-time GI, it is most likely limited to a few lights.

The lack of good GI / inter-reflection / indirect illumination makes artists want to manually add fixup secondary/fill lights to artificially simulate such effects, for environments and characters, separately but sometimes handling both at once. In practice this can work, but it can easily become a mess of lights unless you are very strict and handle these in separate layers. And even if you are organized, since most of these lights will not cast shadows, things often end up hit by artificial lights that shouldn’t affect them.


One common example is the Fridge Mouth Effect: fill lights that are intended to light characters’ faces, or to enhance the lighting of the environment, aren’t shadowed and end up lighting the inside of characters’ mouths, making them glow. The same artifact also shows up on ears that use translucency without self-shadowing.

Fridge Mouth Effect – Non-shadow-casting fill light coming from the right, during a randomly positioned cutscene in Fallout 4

Artists then want to isolate where these fixups happen. They want to work around the shadowing and GI limitations by controlling where the light ends up. This is where light channels come in, but also other exotic modifications to physically-plausible light attenuation, such as custom falloff curves. The latter is up for another discussion. 😉

Missing Shadows Feel Like a Big Deal. Is This It?

If we had shadows on everything, it feels like most of these issues would be non-issues. At the end of the day, it’s also about fighting priorities: total artist control vs practical AAA production realities.


Using flags to enforce rules that compensate for the lack of proper light/shadow behavior would make some sense, if it weren’t for the fact that it 1) doesn’t work well with deferred, 2) creates lighting discrepancies, 3) significantly increases scene-management complexity for both art & code, and 4) breaks global illumination. In this day and age, with the sheer number of available dynamic lights, heavy usage of real-time & static shadow-caching atlases, and more game teams working on solving real-time GI properly, it feels like asking for light channels is a matter of convenience: an approach that used to work on previous-generation titles, and a consequence of getting used to the previous era’s lighting systems.


But I Really Want/Need To Make This Work With Deferred…

Simple G-Buffer, with Lighting Channel (LC)

In the case of deferred, dedicating a full G-buffer channel to storing a bitmask is probably not what you want to do. If you really want to make this work, you can instead store a subset of essential lighting channels in a few bits “borrowed” from other channels. Nonetheless, you will have to figure out how this interacts with your forward path, your particles, your more complex multi-layer environments/scenes, and whether it’s worth the hassle.
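If you do go down that road, the packing itself is straightforward. A hedged sketch of the bit-borrowing idea, here stealing the top two bits of an 8-bit G-buffer channel for a reduced lighting-channel mask (the layout and names are hypothetical; the price is precision on whatever parameter shares the channel):

```cpp
#include <cstdint>

// Hypothetical layout: top 2 bits = reduced lighting-channel mask,
// bottom 6 bits = a material parameter quantized from [0,1] to 6 bits.
inline uint8_t PackChannelBits(float materialParam01, uint32_t lcBits2)
{
    const uint8_t mat6 = static_cast<uint8_t>(materialParam01 * 63.0f + 0.5f);
    return static_cast<uint8_t>(((lcBits2 & 0x3u) << 6) | (mat6 & 0x3Fu));
}

inline void UnpackChannelBits(uint8_t packed, float& materialParam01,
                              uint32_t& lcBits2)
{
    lcBits2         = (packed >> 6) & 0x3u;
    materialParam01 = static_cast<float>(packed & 0x3Fu) / 63.0f;
}
```

The same trick happens in shader code when writing/reading the G-buffer; the lighting pass then tests those two bits before accumulating a light’s contribution.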

So, What’s The Conclusion?

Taking a step back to look at how we use our tools, figuring out what works and what doesn’t with technology, and rethinking our workflows is part of being game developers, and applies to all professions in the field. Never being satisfied, and always looking to improve how we do things, is necessary to our success, individually but also as an industry. Such an industry-wide challenge happened not too long ago with the first generation of games that showcased PBR. I recall discussions with friends who pioneered this at various studios: it wasn’t easy to get everyone on board, until the industry saw the true value of the long-term investment and embraced the change. Now, it’s hard to go back. 😉

In the case of lighting approaches & tools, while we still have some major challenges to tackle with shadows and global illumination, maybe it’s also time to take a leap of faith and think about the long-term value of moving away from old concepts such as light channels. That being said, I invite you to have this discussion with the various rendering programmers, lighting artists and technical artists at your studio. Get perspective on your game’s needs, figure out what needs to happen to get everyone on board with a solution that works for everybody, keep the conversations going, and blog about what worked for your project.

It’s not necessarily about the conclusion, but rather about the discussion. Looking forward to hearing about it, and how your investment in shadows and unified lighting & GI solutions has paid off in the long run. 😉


Addendum – Another Perspective From The Movie Industry

(Screenshot: a movie-industry perspective on light masking.)

Addendum – New York Times

I was asked by the New York Times if I could do a shorter version of this article, for a tech column:

In case you haven’t seen the original article. A few might find this amusing 😉

Thanks

Thanks to everyone who provided feedback by responding to the Twitter poll (Bart Wronski, Steve Anichini, Sébastien Lagarde, Paul Greveson, Don Williamson, Stephen Hill, and Jordan Walker), and especially to Jon Greenberg and Nicolas Lopez for the additional feedback and conversations. It was nice to have both artists and programmers express their views. Let’s keep the conversations going; it’s super important for our industry!

References

[1] Unreal Developer Kit (UDK), “Light Environments”. Online.

[2] Harada, Takahiro. “Forward+: Bringing Deferred Lighting To The Next Level”, EUROGRAPHICS 2012. Online.

[3] Olsson, Ola. “Clustered Deferred and Forward Shading”, HPG 2012. Online.

[4] Greenberg, Jon. “Hitting 60Hz in Unreal Engine”, GDC 2009. Online.

[5] Greenberg, Jon. “Dynamic Lighting in Mortal Kombat vs DC Universe”, 2012. Online.

Moving On

As some of you know, a few weeks ago I left WB Games Montréal. I’ve accepted an amazing, life-changing opportunity at DICE’s Frostbite Labs to work on cutting-edge, industry-leading future graphics technologies, jaw-dropping creative experiences and other unannounced fantastic topics, with my good friend and graphics luminary Johan Andersson and a super-team of unequivocally talented engineers (Graham Wihlidal, Niklas Nummelin, …) and amazing artists.

This was not an easy decision to make. Moving to another country and leaving the (two) teams I’ve built, coworkers and friends at WBGM, and moving away from friends and family is a big deal. Building a new studio and team is not always easy, and I learned a lot. I’m grateful for the undeniable trust that was put in me, for all we’ve accomplished and have yet to show the world, and for having the opportunity to initiate & expand WB Games’ worldwide tech sharing with my friend Jon Greenberg at Netherrealm, Jared Harp and Piotr Mintus at Monolith, and everyone else who contributed. I’m convinced that you will be amazed with what is yet to come from WBGM, and from the other WB studios. 😉

The beginning of my adventure in Stockholm is coming up quickly. More on this soon! 😉


Finding Next-Gen – Part I – The Need For Robust (and Fast) Global Illumination in Games

Figure 1: Direct and Indirect Illumination from a single directional light source. [1]

This post is part of the series “Finding Next-Gen”. Original version published on 2015/11/08. Liveblogging, because opinions evolve over time.

Global Illumination?

Global illumination (GI) is a family of algorithms used in computer graphics to simulate how light interacts with and transfers between the objects in a scene. With its roots in light transport theory (the mathematics behind how energy transfers between various media, taking visibility into account), GI accounts both for the light that comes directly from a light source (direct lighting/illumination) and for how this light is reflected by and onto other surfaces (indirect lighting/illumination).
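For reference, this direct-plus-indirect split is exactly what the classic rendering equation formalizes: the outgoing radiance at a point $\mathbf{x}$ in direction $\omega_o$ is the emitted radiance plus all incoming radiance reflected towards $\omega_o$, weighted by the BRDF $f_r$ and the cosine term:

$$L_o(\mathbf{x}, \omega_o) = L_e(\mathbf{x}, \omega_o) + \int_{\Omega} f_r(\mathbf{x}, \omega_i, \omega_o)\, L_i(\mathbf{x}, \omega_i)\, (\mathbf{n} \cdot \omega_i)\, \mathrm{d}\omega_i$$

Real-time GI techniques are, in essence, different trade-offs in approximating the recursive $L_i$ term.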

As seen in Figure 1, global illumination greatly increases the visual quality of a scene by providing a rich, organic and physically convincing simulation of light. Rather than depending solely on a manual (human) process to achieve the desired look, the mathematics behind GI allow lighting artists to create visually convincing scenes without having to worry about manually replicating the complexity behind effects such as light scattering, color bleeding, or other visuals that are difficult to represent artistically using only direct illumination.

Continue reading “Finding Next-Gen – Part I – The Need For Robust (and Fast) Global Illumination in Games”

Finding Next-Gen: Index

It’s Been A While…

This is the index page for a series of blog posts I’m currently writing about some challenges in real-time rendering and my perspective on these topics.

In no way is this an attempt to sum it all up or provide perfect solutions, but rather to add to the discussion on topics that are close to me and resonate with me, while respecting my various NDAs. What you will find here is undeniably inspired and fueled by the various presentations and discussions from the latest conferences, as well as by the various places where graphics programmers tend to hang out. The following wouldn’t be possible without this amazing community of developers who share on a daily basis. Thanks to everyone for the inspiration and for always sharing your discoveries and opinions! Much needed for progress.

Also, this page will most likely evolve and change. Some topics might appear or be grouped, and some might change greatly depending on how much content I can put together. Feel free to come back and check this page over time.

Please leave comments if need be, and thanks for reading!

Finding Next-Gen

Deformable Snow and DirectX 11 in Batman: Arkham Origins

It’s been a while, but I finally found some time for a quick post gathering the presentations I gave this year at the Game Developers Conference (GDC) and NVIDIA’s GPU Technology Conference (GTC). These presentations showcase and explain some of the features developed for Batman: Arkham Origins.

Continue reading “Deformable Snow and DirectX 11 in Batman: Arkham Origins”

Blending Normal Maps?

What is the best way to blend two normal maps together? Why can’t I just add two normal maps together in Photoshop? I heard that to combine two normals together, you need to add the positive components and subtract the negative components, then renormalize. Looks right to me… Why shouldn’t I be using Overlay (or a series of Photoshop blend modes) to blend normal maps together? I want to add detail to surfaces. How does one combine normal maps in real-time so that the detail normal map follows the topology described by the base normal map?

If this is something you’ve heard before, something you’ve asked yourself, check out this article, written together with Stephen Hill (@self_shadow) on the topic of blending normal maps.
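For a taste, here is a hedged C++ transcription of one of the techniques covered in that article, Reoriented Normal Mapping, which rotates the detail normal so it follows the topology described by the base normal (the original is shader code; see the article for the derivation and the other blending methods):

```cpp
#include <cmath>

struct Float3 { float x, y, z; };

inline Float3 operator*(const Float3& a, float s) { return { a.x * s, a.y * s, a.z * s }; }
inline Float3 operator-(const Float3& a, const Float3& b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
inline float  Dot(const Float3& a, const Float3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
inline Float3 Normalize(const Float3& v)
{
    const float invLen = 1.0f / std::sqrt(Dot(v, v));
    return { v.x * invLen, v.y * invLen, v.z * invLen };
}

// Reoriented Normal Mapping. Inputs are raw texture values in [0,1];
// the result is a unit-length tangent-space normal.
Float3 BlendNormalsRNM(const Float3& base01, const Float3& detail01)
{
    // Unpack the base with a +1 offset on z, and the detail with x/y negated.
    const Float3 t = { base01.x   *  2.0f - 1.0f,
                       base01.y   *  2.0f - 1.0f,
                       base01.z   *  2.0f };
    const Float3 u = { detail01.x * -2.0f + 1.0f,
                       detail01.y * -2.0f + 1.0f,
                       detail01.z *  2.0f - 1.0f };
    return Normalize(t * Dot(t, u) - u * t.z);
}
```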

Continue reading “Blending Normal Maps?”

Approximating Translucency Revisited – With “Simplified” Spherical Gaussian Exponentiation

Lately, someone at work pointed out the approximation of translucency that Marc Bouchard and I developed back at EA [1], which ended up in DICE’s Frostbite engine [2] (aka the Battlefield 3 engine). Wanting to know more, we started browsing the slides one by one and revisiting the technique. Looking at the HLSL, an optimization came to mind, which I discuss in this post. In case you missed the technique, here are a few cool screenshots made by Marc, as well as tips & tricks on implementing the technique and generating the inverted ambient-occlusion/thickness map. See the references for additional links.
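To give a rough idea of the kind of optimization discussed, here is a hedged sketch of the general trick (not the exact Frostbite code): for x in [0,1] and a soft falloff exponent, a power function can be replaced by a spherical-Gaussian-style exponential, trading pow for a single exp2, which is typically cheaper on GPUs:

```cpp
#include <cmath>

// x^n ~= e^(n*(x-1)) for x near 1, rewritten in base 2 since exp2 maps to
// a fast instruction on most GPUs. 1.4427f ~= 1/ln(2).
inline float PowApproxSG(float x, float n)
{
    return std::exp2(1.4427f * n * (x - 1.0f));
}
```

The approximation is exact at x = 1 and degrades as x approaches 0, which is usually acceptable for soft, view-dependent terms like a translucency falloff.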

Continue reading “Approximating Translucency Revisited – With “Simplified” Spherical Gaussian Exponentiation”

A Taste of Live Code Editing With Visual Studio’s Tracepoints

Needless to say, compile times vary greatly from one software project to another. When debugging, we often spend a significant amount of time changing a few lines of code, recompiling, waiting and then relaunching the software. While it is true that one has to adapt their debugging “style” from one project to another (i.e. having a thorough understanding of the problem before “poking around” the code is definitely a plus when dealing with big code bases where edit-and-continue is not possible), waiting for the compiler and linker when debugging is never fun. It’s also very unproductive.

In parallel, some of the tools we use every day allow us to greatly improve our debugging efficiency, though sometimes we are completely oblivious to the power that is available to us. Regardless of how fast or slow compile and link times are on your current project, any tool that can mitigate the recompile cycle when debugging is welcome. One such tool is Visual Studio’s tracepoint: a breakpoint with a custom action associated with it.
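For illustration, a small hypothetical example of a tracepoint replacing an edit-recompile-relaunch cycle (the code is made up; the { } expression syntax and the $FUNCTION keyword are Visual Studio’s):

```cpp
struct Entity { int id; float health; };

float ComputeDamage(const Entity& /*e*/, float dt) { return 10.0f * dt; }

void UpdateEntity(Entity* entity, float dt)
{
    // Suspect 'health' goes negative somewhere? Instead of adding a printf
    // and recompiling, set a tracepoint on the line below with a message like:
    //
    //   $FUNCTION: entity {entity->id} health = {entity->health}
    //
    // Expressions in { } are evaluated live in the debuggee, and the result
    // is printed to the Output window without stopping or rebuilding.
    entity->health -= ComputeDamage(*entity, dt);
}
```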

Continue reading “A Taste of Live Code Editing With Visual Studio’s Tracepoints”

Approximating Translucency – Part II (addendum to GDC 2011 talk / GPU Pro 2 article)

Thanks to everyone who attended my GDC talk! I was quite happy to see all those faces I hadn’t seen in a while, as well as to meet those I’d only had contact with via Twitter, IM or e-mail.

For those who contacted me post-GDC: it seems the content I submitted for GPU Pro 2 didn’t make it into the final samples archive. I must’ve submitted too late, or it didn’t make it to the editor. Either way, the code in the paper is the most up-to-date, so you should definitely check it out (and/or simply buy the book)!

Roger Cordes sent the following questions. I want to share the answers, since they cover most of the questions people had after the talk:

Continue reading “Approximating Translucency – Part II (addendum to GDC 2011 talk / GPU Pro 2 article)”

GDC 2011 – Approximating Translucency for a Fast, Cheap and Convincing Subsurface Scattering Look

As presented at GDC 2011, here’s my talk (with the legendary Marc Bouchard) on our real-time approximation of translucency, featured in the Frostbite 2 engine (used for DICE’s Battlefield 3). These are the slides we presented, along with audio. Enjoy! 🙂


Marc and I would like to thank the following people for their time, reviews and constant support:

For those we managed to meet, we had such a good time with all of you at GDC. Always happy to interact with passionate game developers – this is what makes our industry so great! We hope to see you soon again! 🙂