Channeling Your Inner Light

An attempt at more blogging, but this happened in the meantime, which is why you might find some of the tweets below to be from a few months ago. 😉

A topic of discussion that comes up every now and then between programmers, technical artists and lighting artists is the concept of light masking, or Lighting Channels, and whether this concept is still valid. I’ve had this discussion many times before with developers out there (and somehow I’m sure you have too). Among artists and programmers alike, opinions diverge. To get a new sample on the matter I decided to ask the Twitter-verse:

Light Channels – Yay or Nay (Twitter Poll)


Yup, a division! Before we go over the discussion and the many answers people provided, which I will mix/interleave throughout this post to give perspective, let’s first cover some ground and make sure we all talk about the same thing.

Light(ing) Channels?

In layman’s terms, Lighting Channels (LC) is the functionality of masking lights on a per-object basis, or on a subset of objects that meet the masking criteria. Environment-only, character-only, and cinematic-only lights are a few examples that come to mind.

Lighting Channels in UDK – Point light affecting Dynamic objects tagged as Cinematic 1 [1]

This inclusion/exclusion concept allows lighting artists to have more fine-grained, manual control over light interactions. The image above shows a light affecting dynamic objects such as characters and others, but not the static environment. This case is especially common for cut-scenes, where lighting artists can clearly identify the key, fill and rim lights for each character, for each shot, to ensure that the art-directed lighting is manicured and behaves as expected.


3-point light setup example from a TV show
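
To make the concept concrete, here is a minimal sketch of what this masking boils down to in code, assuming a simple 8-bit mask on both lights and objects. The channel names and helper are made up for illustration, not taken from any particular engine:

```cpp
#include <cstdint>

// Hypothetical 8-bit channel mask; names are illustrative only.
using LightChannelMask = std::uint8_t;

constexpr LightChannelMask kChannelEnvironment = 1 << 0;
constexpr LightChannelMask kChannelCharacters  = 1 << 1;
constexpr LightChannelMask kChannelCinematic1  = 1 << 2;

// A light affects an object only if they share at least one channel.
bool LightAffectsObject(LightChannelMask lightChannels, LightChannelMask objectChannels)
{
    return (lightChannels & objectChannels) != 0;
}
```

A cinematic key light tagged with kChannelCharacters | kChannelCinematic1 would then skip any mesh tagged only with kChannelEnvironment, which is exactly the kind of behavior shown in the UDK example above.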

Lighting channels are not limited to characters and cut-scenes. Other classic examples come to mind, such as additional lights to manually enhance/fix up global illumination (i.e. faking/adding custom diffuse inter-reflection, or “bounce”), or additional lights for animated/hero objects.

Light Channels vs Light Merging

A bit of a side topic, though a concept often intertwined with light channels, is light merging. In the context of forward lighting and how it was done back then, prior to tiled or clustered approaches [2] [3], iterating over all potentially affecting lights on a per-object basis would greatly affect performance, especially on older hardware. To mitigate this issue, dynamically lit objects were often lit by a subset of the lights present in the scene: a select number of closest lights, or dynamically merged/coalesced lights [1] based on brightness/luminous flux, distance, or even using spherical harmonics [4] [5] to merge and extract the n most relevant point lights (often 3). Lights could also be merged by taking their affecting channels into account.
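
To give a rough idea of what such a per-object selection could look like, here is a sketch in C++ (using glm for the vector math). The relevance score and the coalescing of the leftover lights are illustrative assumptions; actual engines varied wildly in their heuristics:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>
#include <glm/glm.hpp>

struct PointLight {
    glm::vec3 position;
    glm::vec3 color; // doubles as intensity here
};

std::vector<PointLight> SelectLightsForObject(std::vector<PointLight> lights,
                                              const glm::vec3& objectCenter,
                                              std::size_t maxLights)
{
    // Approximate each light's contribution at the object's center:
    // total "flux" over squared distance (a crude relevance score).
    auto score = [&](const PointLight& l) {
        glm::vec3 d = l.position - objectCenter;
        float d2 = glm::max(glm::dot(d, d), 1e-4f);
        return (l.color.r + l.color.g + l.color.b) / d2;
    };

    std::sort(lights.begin(), lights.end(),
              [&](const PointLight& a, const PointLight& b) { return score(a) > score(b); });

    if (lights.size() <= maxLights)
        return lights;

    std::vector<PointLight> result(lights.begin(),
                                   lights.begin() + static_cast<std::ptrdiff_t>(maxLights));

    // Coalesce the remaining lights into a single approximate light,
    // positioned at their score-weighted average.
    PointLight merged{glm::vec3(0.0f), glm::vec3(0.0f)};
    float totalWeight = 0.0f;
    for (std::size_t i = maxLights; i < lights.size(); ++i) {
        float w = score(lights[i]);
        merged.position += lights[i].position * w;
        merged.color += lights[i].color;
        totalWeight += w;
    }
    if (totalWeight > 0.0f) {
        merged.position /= totalWeight;
        result.push_back(merged);
    }
    return result;
}
```

An object then gets lit with at most maxLights + 1 lights: the n most relevant ones, plus one approximate light standing in for everything else.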

What’s the connection with light channels, you say? Well, beyond lights being merged based on their channels, it turns out that while merging lights can provide a “good approximation” of light interactions with dynamic objects in certain scenarios, without having to compute all interactions, you still end up with a discrepancy between lights that have been manually placed by artists and lights that have been merged for dynamic objects. To compensate, lighting artists often requested additional “forced/non-merged” lights. Unfortunately this often led to too many of these lights, and we were back at square one with regard to the performance benefits of lighting with merged lights. Basically, a performance-affecting visual workaround to fix a performance workaround. It’s getting complicated…

A Workaround For Something Broken?

At this point you probably feel like something’s odd, not working, or simply that light channels are a hack for something broken in the way we light scenes. And you’d be right to think so. To put things in perspective, though we programmers work really hard to constantly improve their representation and behavior, real-time lights in video games don’t generally behave the way they’re intuitively expected to. Of the many discrepancies, the following stand out:

  1. Shadows are commonly missing from many lights
    1. Only a few select (key) lights get shadows, not all of them.
    2. Shadowless lights shine through walls, and can hit unintended targets.
  2. Lights don’t (all) trigger inter-reflections / indirect illumination
    1. The lack of proper GI on all dynamic lights means these lights provide direct illumination only.
    2. If your engine has real-time GI, it is most likely limited to a few lights.

The lack of good GI / inter-reflection / indirect illumination causes artists to want to manually add fixup secondary/fill lights to artificially simulate such effects, for both environments and characters, sometimes handling both cases separately, sometimes simultaneously. In practice this can work, but it can easily become a mess of lights unless you are very strict and handle these in separate layers. And even if you are organized, since most of these lights will not cast shadows, things often end up getting hit by artificial lights that shouldn’t.


One common example is the Fridge Mouth Effect: fill lights that are intended to light characters’ faces, or to enhance the lighting on the environment, aren’t shadowed and end up making the inside of characters’ mouths glow. The same artifact also shows up on ears rendered with translucency and no self-shadowing.

Fridge Mouth Effect – Non-shadow casting fill light coming from the right, during a randomly positioned cutscene in Fallout 4

Artists then want to isolate where these fixups happen. They want to work around the shadowing and GI limitations by controlling where the light ends up. This is where light channels come in, along with other exotic modifications to physically-plausible light attenuation, such as custom falloff curves. The latter is up for another discussion. 😉
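
For a taste of what custom falloff curves mean in practice, here is a small sketch contrasting a windowed inverse-square falloff (one common physically-plausible formulation) with a hand-tuned curve whose exponent is decoupled from physics. Both functions and their parameters are illustrative:

```cpp
#include <glm/glm.hpp>

// Physically-plausible point-light falloff: inverse square, multiplied by a
// windowing term so the light smoothly reaches zero at a chosen radius.
float FalloffInverseSquare(float dist, float radius)
{
    float atten = 1.0f / glm::max(dist * dist, 1e-4f);
    float window = glm::clamp(1.0f - glm::pow(dist / radius, 4.0f), 0.0f, 1.0f);
    return atten * window * window;
}

// "Exotic" custom falloff: a hand-tuned exponent, decoupled from physics,
// which artists sometimes use to shape where a light lands.
float FalloffCustom(float dist, float radius, float exponent)
{
    return glm::pow(glm::clamp(1.0f - dist / radius, 0.0f, 1.0f), exponent);
}
```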

Missing Shadows Feels Like a Big Deal. Is This It?

If we had shadows on everything, it feels like most of these issues would be non-issues. At the end of the day, it’s also about fighting priorities: total artist control vs. practical AAA production realities.


Using flags to enforce rules that compensate for the lack of proper light/shadow behavior would make some sense if it weren’t for the fact that it 1) doesn’t work well with deferred, 2) creates lighting discrepancies, 3) significantly increases scene management complexity for both art & code, and 4) breaks global illumination. In this day and age, with the sheer number of available dynamic lights, heavy usage of real-time & static shadow caching atlases, and more game teams working at solving real-time GI properly, it feels like asking for light channels is a matter of convenience for an approach that used to work on previous-generation titles, a consequence of getting used to the previous era’s lighting systems.


But I Really Want/Need To Make This Work With Deferred…

Simple G-Buffer, with Lighting Channel (LC)

In the case of deferred, dedicating a full channel to storing a bitmask is probably not what you want to do. If you really want to make this work, you can instead store a subset of essential lighting channels in a few bits “borrowed” from other channels. Nonetheless, you will have to figure out how this interacts with your forward path, your particles, your more complex multi-layer environments/scenes, and whether it’s worth the hassle.
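
As an illustration, here is a sketch of what borrowing bits could look like, assuming you steal the top 2 bits of an 8-bit G-buffer channel for a reduced light-channel mask. The layout and bit count are assumptions, and the logic is written as CPU-side C++ even though it would live in your G-buffer packing and lighting shaders:

```cpp
#include <cstdint>

// Illustrative layout: borrow the top 2 bits of an 8-bit G-buffer channel
// (say, the alpha of an RGBA8 target) for a reduced light-channel mask,
// keeping the low 6 bits for whatever the channel originally stored.
constexpr std::uint8_t kChannelBits  = 2;
constexpr std::uint8_t kChannelShift = 8 - kChannelBits;           // top bits
constexpr std::uint8_t kPayloadMask  = (1u << kChannelShift) - 1;  // low 6 bits

std::uint8_t PackGBufferByte(std::uint8_t payload6, std::uint8_t lightChannels2)
{
    return std::uint8_t((payload6 & kPayloadMask) |
                        ((lightChannels2 & 0x3u) << kChannelShift));
}

std::uint8_t UnpackLightChannels(std::uint8_t gbufferByte)
{
    return gbufferByte >> kChannelShift;
}

// Lighting pass: reject the light if it shares no channel with the pixel.
bool PixelReceivesLight(std::uint8_t gbufferByte, std::uint8_t lightChannels2)
{
    return (UnpackLightChannels(gbufferByte) & lightChannels2) != 0;
}
```

Note that whatever originally lived in those 2 bits loses precision, which is part of the “is it worth the hassle” calculus.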

So, What’s The Conclusion?

Taking a step back and looking at how we use our tools, figuring out what works and what doesn’t with technology, and rethinking our workflows is part of being game developers, and applies to all professions in the field. Never being satisfied, and always looking to improve how we work, is necessary to our success, individually but also as an industry. Such an industry-wide challenge happened not too long ago with the first generation of games that showcased PBR. I recall discussions with friends who pioneered this at various studios, and it wasn’t easy to get everyone on board, until the industry saw the true value in the long-term investment and embraced the change. Now, it’s hard to go back. 😉

In the case of lighting approaches & tools, while we still have some major challenges to tackle with shadows and global illumination, maybe it’s also time to take a leap of faith and think about the long-term value of moving away from some old concepts such as light channels. That being said, I invite you to have this discussion with the various rendering programmers, lighting artists and technical artists at your studio: get perspective on your game’s needs, figure out what needs to happen to get everyone on board with a solution that works for everybody, keep the conversations going, and blog about what worked for your project.

It’s not necessarily about the conclusion, but rather about the discussion. Looking forward to hearing about it, and how your investment in shadows and unified lighting & GI solutions has paid off in the long run. 😉


Addendum – Another Perspective From The Movie Industry


Addendum – New York Times

I was asked by the New York Times if I could do a shorter version of this article, for a tech column:

In case you haven’t seen the original article. A few might find this amusing 😉

Thanks

Thanks to everyone who provided feedback by responding to the Twitter poll (Bart Wronski, Steve Anichini, Sébastien Lagarde, Paul Greveson, Don Williamson, Stephen Hill, and Jordan Walker), and especially Jon Greenberg and Nicolas Lopez for the additional feedback and conversations. It was nice to have both artists and programmers express their views. Let’s keep the conversations going; it’s super important for our industry!

References

[1] Unreal Developer Kit (UDK), “Light Environments”, Online.

[2] Harada, Takahiro. “Forward+: Bringing Deferred Lighting to the Next Level”, EUROGRAPHICS 2012. Online.

[3] Olsson, Ola. “Clustered Deferred and Forward Shading”, HPG 2012. Online.

[4] Greenberg, Jon. “Hitting 60Hz in Unreal Engine”, GDC 2009. Online.

[5] Greenberg, Jon. “Dynamic Lighting in Mortal Kombat vs DC Universe”, 2012. Online.

Finding Next-Gen – Part I – The Need For Robust (and Fast) Global Illumination in Games

Figure 1: Direct and Indirect Illumination from a single directional light source. [1]

This post is part of the series “Finding Next-Gen“. Original version on 2015/11/08. Liveblogging, because opinions evolve over time.

Global Illumination?

Global illumination (GI) is a family of algorithms used in computer graphics that simulate how light interacts with and transfers between objects in a scene. With its roots in light transport theory (the mathematics behind energy, how it transfers between various media, and the role visibility plays in it), GI takes into account both the light that comes directly from a light source (direct lighting/illumination), as well as how this light is reflected by and onto other surfaces (indirect lighting/illumination).
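
Formally, this is captured by the rendering equation: the radiance leaving a point x in direction ω_o is what the surface emits, plus all incoming radiance integrated over the hemisphere, weighted by the BRDF and the incident angle:

```latex
L_o(\mathbf{x}, \omega_o) = L_e(\mathbf{x}, \omega_o)
  + \int_{\Omega} f_r(\mathbf{x}, \omega_i, \omega_o)\,
    L_i(\mathbf{x}, \omega_i)\,(\omega_i \cdot \mathbf{n})\,\mathrm{d}\omega_i
```

Direct illumination evaluates L_i only for light arriving straight from light sources; global illumination also accounts for L_i arriving from other surfaces, which makes the integral recursive, and is precisely what makes GI expensive to solve in real time.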

As seen in Figure 1, global illumination greatly increases the visual quality of a scene by providing a rich, organic and physically convincing simulation of light. Rather than solely depending on a manual (human) process to achieve the desired look, the mathematics behind GI allow lighting artists to create visually convincing scenes without having to worry about how they can manually replicate the complexity behind effects such as light scattering, color bleeding, or other visuals that are difficult to represent artistically using only direct illumination.

Continue reading “Finding Next-Gen – Part I – The Need For Robust (and Fast) Global Illumination in Games”

Finding Next-Gen: Index

It’s Been Awhile…

This is the index page for a series of blog posts I’m currently writing about some challenges in real-time rendering and my perspective on these topics.

In no way is this an attempt to sum it all up or provide perfect solutions, but rather to add to the discussion on topics that are close to me and resonate with me, while respecting my various NDAs. What you will find here is undeniably inspired and fueled by the various presentations and discussions from the latest conferences, as well as by various discussions wherever graphics programmers tend to hang out. The following wouldn’t be possible without this amazing community of developers that shares on a daily basis – thanks to everyone for the inspiration and for always sharing your discoveries and opinions! Much needed for progress.

Also, this page will most likely evolve and change. Some topics might appear, be grouped, and some might greatly change depending on how much content I can put together. Feel free to come back and check this page over time.

Please leave comments if need be, and thanks for reading!

Finding Next-Gen

Deformable Snow and DirectX 11 in Batman: Arkham Origins

It’s been a while, but I finally found some time for a quick post to round up the presentations I gave this year at the Game Developers Conference (GDC) and NVIDIA’s GPU Technology Conference (GTC). These presentations showcase and explain some of the features developed for Batman: Arkham Origins.

Continue reading “Deformable Snow and DirectX 11 in Batman: Arkham Origins”

Blending Normal Maps?

What is the best way to blend two normal maps together? Why can’t I just add two normal maps together in Photoshop? I heard that to combine two normals together, you need to add the positive components and subtract the negative components, then renormalize. Looks right to me… Why shouldn’t I be using Overlay (or a series of Photoshop blend modes) to blend normal maps together? I want to add detail to surfaces. How does one combine normal maps in real-time so that the detail normal map follows the topology described by the base normal map?

If this is something you’ve heard before, or something you’ve asked yourself, check out this article, written together with Stephen Hill (@self_shadow), on the topic of blending normal maps.
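
For the impatient: the approach the article lands on is Reoriented Normal Mapping, which rotates the detail normal so it follows the surface described by the base normal. Here is the core math transcribed to CPU-side C++ (using glm) as a sketch; see the article for the derivation and the optimized shader version:

```cpp
#include <glm/glm.hpp>

// Reoriented Normal Mapping (RNM): reorient the detail normal so it follows
// the topology described by the base normal. Inputs are tangent-space
// normals as stored in a texture, i.e. components in [0,1]; the output is
// a unit normal in [-1,1]. The result is already near unit length; the
// normalize is there for numerical safety.
glm::vec3 BlendRNM(const glm::vec3& base01, const glm::vec3& detail01)
{
    glm::vec3 t = base01   * glm::vec3( 2.0f,  2.0f, 2.0f) + glm::vec3(-1.0f, -1.0f,  0.0f);
    glm::vec3 u = detail01 * glm::vec3(-2.0f, -2.0f, 2.0f) + glm::vec3( 1.0f,  1.0f, -1.0f);
    return glm::normalize(t * glm::dot(t, u) / t.z - u);
}
```

Unlike a plain add or an Overlay blend, this treats the base normal as the new “up” axis for the detail map, which is why the detail correctly follows the base topology.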

Continue reading “Blending Normal Maps?”

Approximating Translucency – Part II (addendum to GDC 2011 talk / GPU Pro 2 article)

Thanks to everyone who attended my GDC talk! Was quite happy to see all those faces I hadn’t seen in a while, as well as meet those whom I only had contact with via Twitter, IM or e-mail.

For those who contacted me post-GDC, it seems the content I submitted for GPU Pro 2 didn’t make it into the final samples archive. I must’ve submitted it too late, or it didn’t make it to the editor. Either way, the code in the paper is the most up-to-date, so you should definitely check it out (and/or simply buy the book)!

Roger Cordes sent the following questions. I want to share the answers, since they cover most of the questions people had after the talk:

Continue reading “Approximating Translucency – Part II (addendum to GDC 2011 talk / GPU Pro 2 article)”

GDC 2011 – Approximating Translucency for a Fast, Cheap and Convincing Subsurface Scattering Look

As presented at GDC 2011, here’s my (and the legendary Marc Bouchard’s) talk on our real-time approximation of translucency, featured in the Frostbite 2 engine (used for DICE’s Battlefield 3). These are the slides that we presented, along with audio. Enjoy! 🙂
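
For those who just want the gist before diving into the slides, the core of the approximation fits in a few lines. Here is a CPU-side C++ transcription (using glm); the parameter names follow the talk loosely, so treat this as a sketch rather than the exact shipped shader:

```cpp
#include <cmath>
#include <glm/glm.hpp>

// Core of the translucency approximation: distort the inverted light vector
// by the surface normal, then shape the view-aligned transmitted lobe with a
// power function, scaled by attenuation and a precomputed local thickness.
// lightDir and viewDir both point away from the surface (to the light and
// to the camera, respectively).
glm::vec3 ApproxTranslucency(const glm::vec3& normal,
                             const glm::vec3& lightDir,
                             const glm::vec3& viewDir,
                             const glm::vec3& lightColor,
                             float attenuation, float thickness,
                             float distortion, float power,
                             float scale, float ambient)
{
    glm::vec3 distortedLight = lightDir + normal * distortion;
    float transDot = std::pow(glm::clamp(glm::dot(viewDir, -distortedLight), 0.0f, 1.0f),
                              power) * scale;
    float transmittance = attenuation * (transDot + ambient) * thickness;
    return lightColor * transmittance;
}
```

The thickness term is the interesting part: a precomputed “local thickness” (e.g. baked from ambient occlusion computed on inverted normals) that modulates how much light makes it through the surface.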

 

 

Marc and I would like to thank the following people for their time, reviews and constant support:

For those we managed to meet, we had such a good time with all of you at GDC. Always happy to interact with passionate game developers – this is what makes our industry so great! We hope to see you soon again! 🙂

GDC 2011 Talks You Should Attend

As seen in the previous post, I’ll be presenting at GDC 2011. We also have several AMAZING speakers from EA (Electronic Arts) whose talks you should attend:

SPU-based Deferred Shading in BATTLEFIELD 3 for Playstation 3

[Speaker]

Christina Coffin (DICE), @ChristinaCoffin

[Description]

This session presents a detailed, programmer-oriented overview of our SPU-based shading system implemented in DICE’s Frostbite 2 engine, and how it enables more visually rich environments in BATTLEFIELD 3 and better performance over traditional GPU-only renderers. We explain in detail how our SPU tile-based deferred shading system is implemented, and how it supports rich material variety, high dynamic range lighting, and large amounts of light sources of different types through an extensive set of culling, occlusion and optimization techniques.

[Takeaway]

Attendees will learn how SPU-based shading allows a rich variety in materials, more complex lighting, and enables offloading of traditional GPU work over to SPUs. Optimization techniques used to minimize SPU processing time for various scenarios will also be taught. Attendees will understand how to technically design, balance and analyze the performance of a game environment that uses an SPU-based shading system. Attendees will learn key points of creating and optimizing code and data processing for high-throughput shading on SPUs.

[Intended Audience]

This session is intended for advanced programmers with an understanding of current forward and deferred rendering techniques, as well as console development experience. Knowledge of lower-level programming with vector intrinsics, assembly language, and structure-of-arrays versus array-of-structures data processing is recommended.

[Links]

http://schedule.gdconf.com/session/12273

Lighting You Up in BATTLEFIELD 3

[Speaker]

Kenny Magnusson (DICE)

[Description]

This session presents a detailed overview of the new lighting system implemented in DICE’s Frostbite 2 engine and how it enables us to stretch the boundaries of lighting in BATTLEFIELD 3, with its highly dynamic, varied and destructible environments. BATTLEFIELD 3 goes beyond the lighting limitations found in our previous Battlefield games, while avoiding costly and static prebaked lighting without compromising quality. We discuss the technical implementation of the art direction in BATTLEFIELD 3, the workflows we created for it, as well as how all the individual lighting components fit together: deferred rendering, HDR, dynamic radiosity and particle lighting.

[Takeaway]

Attendees will learn the workflow we use to light our worlds, as well as memory and performance considerations to hit our performance budgets from a technical art perspective. Attendees will also get a thorough insight into an exciting new approach to lighting both open landscapes and indoor environments with dynamic radiosity in a fully destructible world.

[Intended Audience]

Attendees should understand the fundamentals of lighting systems used in contemporary game development as well as basic principles of rendering technology. Primarily directed at technical artist and rendering programmers, the presentation is accessible enough that anyone attending will gain an insight into the world of lighting.

[Links]

http://schedule.gdconf.com/session/12139

Advanced Visual Effects with DirectX 11

[Speakers]

Johan Andersson (DICE, @repi), Evan Hart (NVIDIA), Richard Huddy (AMD), Nicolas Thibieroz (AMD), Cem Cebenoyan (NVIDIA), Jon Story (AMD), John McDonald (NVIDIA Corporation), Jon Jansen (NVIDIA Corp), Holger Grün (AMD), Takahiro Harada (Havok) and Nathan Hoobler (NVIDIA)

[Description]

Brought to you with the collaboration of the industry’s leading hardware and software vendors, this day-long tutorial provides an in-depth look at the Direct3D technologies in DirectX 11 and how they can be applied to cutting-edge PC game graphics for GPUs and APUs. This year we focus exclusively on DirectX 11, examining a variety of special effects which illustrate its use in real game content. This will include detailed presentations from AMD’s and NVIDIA’s demo and developer support teams, as well as from some of the top game developers who ship real games into the marketplace. In addition to illustrating the details of rendering advanced real-time visual effects, this tutorial will cover a series of vendor-neutral optimizations that developers need to keep in mind when designing their engines and shaders.

[Takeaway]

Attendees will gain greater insights into advanced utilization of the Direct3D 11 graphics API as used in popular shipping titles.

[Intended Audience]

The intended audience for this session is a graphics programmer who is planning or actively developing a Direct3D 11 application.

[Link]

http://schedule.gdconf.com/session/12078

Culling the Battlefield: Data Oriented Design in Practice

[Speaker]

Daniel Collin (DICE), @daniel_collin

[Description]

This talk will highlight the evolution of the object culling system used in the Frostbite engine over the years, and why we decided to rewrite, for BATTLEFIELD 3, a system that had worked well for 4 shipping titles. The new culling system is developed using a data-oriented design that favors simple data layouts, which enables very efficient computation using pipelined vector instructions. Concrete examples of how code is developed with this approach, and the implications and benefits compared to traditional tree-based systems, will be given.

[Takeaway]

Attendees will learn how to apply data-oriented design in practice to write simple but high-throughput code that works well on all platforms. This is especially important for the current consoles.

[Intended Audience]

Intended for programmers on all levels but some background on vector math and basic threading would be beneficial.

[Link]

http://schedule.gdconf.com/session/12251

Four Guns West

[Speakers]

Ben Minto (DICE), Chuck Russom (Chuck Russom FX), Jeffrey Wesevich (38 Studios), Chris Sweetman (Splash Damage Ltd.), and Charles Maynes (Freelance)

[Description]

This session aims to give an insight into the shadowy world of audio in AAA FPS titles, featuring the sound designers behind MEDAL OF HONOR, BRINK, BLACK, HBO’s THE PACIFIC, and CALL OF DUTY. The face-off is split into bite-size chunks concentrating on key areas that are required to design the weapon audio for a AAA shooter. Areas of focus will include insight into Weapons Field Recording headed up by Charles Maynes, Sound Design with Chuck Russom, Creating Believable Worlds and Mixing Practices with Ben Minto, and Real vs Hyper Real with Chris Sweetman. The panel will also discuss the emotional power of weapon sound design in Video Games & Film.

[Takeaway]

New attendees will get tips and tactics on approaching audio in an FPS, which can then be applied to their own productions. It will empower producers and game designers to consider audio early in a title’s development, which will increase the player’s experience and enjoyment tenfold.

[Intended Audience]

The target audience will be sound designers, producers, game designers and creatives from all aspects of video games wanting insight into the tricks behind great-sounding AAA titles. The session will be structured to allow for all levels of knowledge in the specific fields.

[Link]

http://schedule.gdconf.com/session/12109

GDC 2011 – Approximating Translucency for a Fast, Cheap and Convincing Subsurface Scattering Look

This year, I’ll be presenting at GDC (Game Developers Conference), along with other great speakers from EA (especially DICE).

The talk is about a very cheap and fast approximation of translucency that will allow developers to add convincing subsurface scattering to their scenes with minimal impact on performance. The technique works well in a wide variety of scenes, with anything from minimal to massive numbers of lights. Here’s a quick summary of my talk, which you can also find on the GDC website.

[Title]

Approximating Translucency for a Fast, Cheap, and Convincing Subsurface Scattering Look

[Description]

In real-time computer graphics, the interaction of light and matter is often reduced to local reflection described by Bidirectional Reflectance Distribution Functions (BRDFs). While this mathematical model is valid for describing surface reflectance of opaque objects, many objects in nature are partly translucent: light travels within the surface. To simulate translucent properties of objects in real-time, such as subsurface scattering (in human skin and other surfaces), developers rely on complex and expensive techniques. Conversely, this talk presents a fast and scalable approximation of translucency for a convincing subsurface scattering look which can be implemented on current and next generation video gaming systems.

[Takeaway]

Developers attending this session will be able to improve their game’s visuals by adding real-time translucency to their scenes with minimal impact on the run-time, as demonstrated using EA DICE’s Frostbite engine. Moreover, this effect, once limited to offline rendering, will undeniably help developers in creating a more complete and immersive gaming experience.

[Intended Audience]

Reaching stakeholders from several disciplines of video game development, this talk is intended for all individuals that share common goals in terms of real-time graphics and that strive towards improving the visual quality of tomorrow’s games: rendering programmers, technical artists, art directors and technical art directors.

Visit this website for more info on other great talks to be presented.

See you at GDC!