Hexagonal Bokeh Blur Revisited – Part 1: Basic 3-pass Version

This post is from a multi-part series titled “Hexagonal Bokeh Blur Revisited”. Want to jump directly to the index? If so, click here.

The code below demonstrates the most straightforward way to achieve this hexagonal blur. It is done in three passes.


Step 0 – Blur Function

First, let’s define our blur function. This will be reused along the way.

float4 BlurTexture(sampler2D tex, float2 uv, float2 direction)
{
    float4 finalColor = 0.0f;
    float blurAmount = 0.0f;
 
    // This offset is important. Will explain later. ;)
    uv += direction * 0.5f;
 
    for (int i = 0; i < NUM_SAMPLES; ++i)
    {
        float4 color = tex2D(tex, uv + direction * i);

        // Weight each sample by its CoC (stored in alpha) so that in-focus
        // pixels don't bleed into the blur.
        color *= color.a;
        blurAmount += color.a;
        finalColor += color;
    }
 
    // Normalize by the accumulated CoC weight.
    // (Assumes at least one sample has a non-zero CoC; guard the divide
    // with a small epsilon if that's not guaranteed in your setup.)
    return (finalColor / blurAmount);
}
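Note that the snippets in this post reference a few shared declarations (NUM_SAMPLES, PI, invViewDims, and the various textures) without defining them. Here is a minimal sketch of what they might look like, in the same D3D9-style HLSL as the code above; the names come from the snippets, but the values and sampler setup are illustrative assumptions:

// Shared declarations assumed by the snippets in this post (values are examples).
#define NUM_SAMPLES 16                 // taps per blur direction

static const float PI = 3.14159265f;

sampler2D sceneTexture;                // RGB = scene color, A = CoC
sampler2D verticalBlurTexture;         // output of the vertical blur pass
sampler2D diagonalBlurTexture;         // output of the diagonal blur pass

float2 invViewDims;                    // 1.0 / render target dimensions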

Step 1 – Vertical Blur

First, we blur vertically.


// Get the local CoC to determine the radius of the blur.
float coc = tex2D(sceneTexture, uv).a;

// CoC-weighted vertical blur (90 degrees, i.e. straight up).
float2 blurDirection = coc * invViewDims * float2(cos(PI/2), sin(PI/2));
float3 color = BlurTexture(sceneTexture, uv, blurDirection).rgb * coc;

// Done! Carry the CoC along in alpha for the next passes.
return float4(color, coc);

Step 2 – Diagonal Blur

Second, we blur diagonally.

This stage is similar to Step 1, but at a -30 degree (-PI/6) angle. We also combine the diagonal blur with the vertical blur.


// Get the local CoC, carried along in the alpha of the previous pass.
float coc = tex2D(verticalBlurTexture, uv).a;

// CoC-weighted diagonal blur.
float2 blurDir = coc * invViewDims * float2(cos(-PI/6), sin(-PI/6));
float4 color = BlurTexture(verticalBlurTexture, uv, blurDir) * coc;

// Combine with the vertical blur.
// We don't need to divide by 2 here, because there is no overlap.
return float4(color.xyz + tex2D(verticalBlurTexture, uv).rgb, coc);


Step 3 – Rhomboid Blur

The final step is the rhomboid blur.

This is done in two parts: a -30 degree (-PI/6) blur, as well as its reflection at -150 degrees (-5PI/6).


// Get the local CoC values to determine the radius of each blur.
float coc = tex2D(verticalBlurTexture, uv).a;
float coc2 = tex2D(diagonalBlurTexture, uv).a;

// Sample the vertical blur (1st MRT) texture with this new blur direction
float2 blurDir = coc * invViewDims * float2(cos(-PI/6), sin(-PI/6));
float4 color = BlurTexture(verticalBlurTexture, uv, blurDir) * coc;

// Sample the diagonal blur (2nd MRT) texture with this new blur direction
float2 blurDir2 = coc2 * invViewDims * float2(cos(-5*PI/6), sin(-5*PI/6));
float4 color2 = BlurTexture(diagonalBlurTexture, uv, blurDir2) * coc2;

// And we're done! Average the two rhomboid blurs.
float3 output = (color.rgb + color2.rgb) * 0.5f;

Putting It All Together


As you can see, the code listed above is pretty straightforward and should be a good base for achieving this blur. Additionally, a code sample is provided here.

We can do better. Let’s do it in 2 passes!

Hexagonal Bokeh Blur Revisited

Hello Bokeh, My Old Friend

I’ve come to talk with you again…

It’s been a while. The last time we spoke to each other was back at SIGGRAPH 2011 in the Advances in Real-Time Rendering course with John White.


Separable Hexagonal Bokeh Depth-of-Field in Need For Speed: The Run

You’ve Been Around

Back then, we didn’t give out the code for achieving this effect. It turns out many developers out there were still able to implement it, based solely on John’s slides and notes!

Watch Dogs – Ubisoft

Ghost Recon Wildlands – Ubisoft

Tom Clancy’s The Division – Ubisoft

Mortal Kombat X – NetherRealm (WB Games)

Sniper Elite 2 – Rebellion

NBA 2K14

NBA 2K17

Gears of War: Ultimate Edition

Mikkel Gjoel’s ShaderToy

Evan Wallace’s WebGL Lens Filter

We Meet Again?

While the technique presented six years ago has been showcased in a myriad of games and is straightforward to implement, it turns out some of the implementations out there have unfortunate visual artifacts. 😦


Luckily these artifacts can be easily solved! 😀

Back then, for the sake of time, John left out information regarding circle-of-confusion management, as well as other details. If not handled correctly, these omissions can lead to unwanted artifacts. It’s been six years since the presentation, so I feel it’s time we set everything straight and clear these artifacts up.

The goal behind this post is to provide an “artifact-free” implementation, or rather a code companion to the SIGGRAPH 2011 presentation. The code is hosted on GitHub.

I’m super busy, I want to blog more, and I want to make this manageable, so this post is split into multiple parts. Once the whole series is done, I might collapse all the parts into something more concise.

In the meantime, thanks for stopping by! 🙂

Index

Acknowledgements

Thanks to John White for coming up with the original idea of “scatter-gather” separable hexagonal bokeh depth of field by rhomboid decomposition. I really miss the days when we used to work together, back when I was on Battlefield 3 and he was on NFS: The Run. Crazy-but-good times with lots of good exchanges. We shared a lot, and I sure learned a lot. Thanks John! 🙂

References

WHITE, John, and BARRÉ-BRISEBOIS, Colin. “More Performance! Five Rendering Ideas From Battlefield 3 and Need For Speed: The Run”, Advances in Real-Time Rendering in Games, SIGGRAPH 2011. Available online.

Channeling Your Inner Light

An attempt at more blogging, but this happened in the meantime, which is why you might find some of the tweets below to be from a few months ago. 😉

A topic of discussion that comes up every now and then between programmers, technical artists and lighting artists is the concept of light masking, or Lighting Channels, and whether this concept is still valid. I’ve had this discussion many times with developers out there (and somehow I’m sure you have too). Artists and programmers alike, opinions diverge. To get a fresh sample on the matter, I decided to ask the Twitter-verse:

Light Channels – Yay or Nay (Twitter Poll)


Yup, a division! Before we go over the discussion and the many answers people provided (which I will interleave throughout this post for perspective), let’s first cover some ground and make sure we’re all talking about the same thing.

Light(ing) Channels?

In layman’s terms, Lighting Channels (LC) is the functionality of masking lights on a per-object basis, or on a subset of objects that meet the masking criteria. Environment-only, character-only, and cinematic-only lights are a few examples that come to mind.

Lighting Channels in UDK – Point light affecting Dynamic objects tagged as Cinematic 1 [1]

This inclusion/exclusion concept allows lighting artists to have more fine-grained, manual control over light interactions. The image above describes a light affecting dynamic objects such as characters, but not the static environment. This case is especially common for cut-scenes, where lighting artists can clearly identify the key, fill and rim lights for each character, for each shot, to ensure that the art-directed lighting is manicured and behaves as expected.


3-point light setup example from a TV show

Lighting channels are not limited to characters and cut-scenes. Other classic examples come to mind, such as additional lights to manually enhance/fix up global illumination (i.e. faking/adding custom diffuse inter-reflection, or “bounce”), or additional lights for animated/hero objects.
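To make the masking criteria concrete, here is a minimal sketch of what a channel mask can look like in HLSL. The channel names and the uint encoding are illustrative assumptions, not from any particular engine:

// Illustrative lighting channel bits (names are hypothetical).
#define LC_ENVIRONMENT (1u << 0)
#define LC_CHARACTER   (1u << 1)
#define LC_CINEMATIC   (1u << 2)

// A light affects an object only if their channel masks overlap.
bool LightAffectsObject(uint lightMask, uint objectMask)
{
    return (lightMask & objectMask) != 0u;
}

For example, a cinematic rim light tagged LC_CHARACTER | LC_CINEMATIC would light a character tagged with either of those channels, but skip environment-only geometry.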

Light Channels vs Light Merging

A bit of a side topic, though often intertwined with light channels, is light merging. In the context of forward lighting as it was done back then, prior to tiled or clustered approaches [2] [3], iterating over all potentially affecting lights on a per-object basis would greatly hurt performance, especially on older hardware. To mitigate this, dynamically lit objects were often lit by a subset of the lights present in the scene: a select number of closest lights, or lights dynamically merged/coalesced [1] based on brightness/luminous flux or distance, or even merged using spherical harmonics [4] [5] to extract the n most relevant point lights (often 3). Lights could also be merged by taking their affecting channels into account.
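As an illustration of the “select number of closest lights” flavor of this, here is a hedged sketch of a per-object top-N light selection, scored by intensity over squared distance. This kind of logic typically runs engine-side; it is written in HLSL here only for consistency with the rest of the post, and all names, array sizes and the scoring heuristic are assumptions:

#define MAX_SCENE_LIGHTS  64
#define MAX_OBJECT_LIGHTS 3   // n most relevant lights (often 3)

float3 lightPositions[MAX_SCENE_LIGHTS];
float  lightIntensities[MAX_SCENE_LIGHTS];  // luminous flux proxy
int    numSceneLights;

// Pick the MAX_OBJECT_LIGHTS most relevant lights for an object centered
// at objectCenter, scored by intensity over squared distance.
void SelectLights(float3 objectCenter, out int selected[MAX_OBJECT_LIGHTS])
{
    float bestScores[MAX_OBJECT_LIGHTS];
    for (int j = 0; j < MAX_OBJECT_LIGHTS; ++j)
    {
        selected[j]   = -1;
        bestScores[j] = 0.0f;
    }

    for (int i = 0; i < numSceneLights; ++i)
    {
        float3 toLight = lightPositions[i] - objectCenter;
        float  score   = lightIntensities[i] / max(dot(toLight, toLight), 1e-4f);

        // Insert into the small sorted top-N list.
        for (int j = 0; j < MAX_OBJECT_LIGHTS; ++j)
        {
            if (score > bestScores[j])
            {
                for (int k = MAX_OBJECT_LIGHTS - 1; k > j; --k)
                {
                    bestScores[k] = bestScores[k - 1];
                    selected[k]   = selected[k - 1];
                }
                bestScores[j] = score;
                selected[j]   = i;
                break;
            }
        }
    }
}

Merging proper would go one step further and coalesce the rejected lights into the selected ones (or into a spherical harmonics approximation) instead of simply dropping them.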

What’s the connection with light channels, you say? Beyond lights being merged based on their channels, it turns out that while merging lights can provide a “good approximation” of light interactions with dynamic objects without having to compute all interactions, you still end up with a discrepancy between lights that have been manually placed by artists and lights that have been merged for dynamic objects. To compensate, lighting artists often requested additional “forced/non-merged” lights. Unfortunately, this often led to too many of these lights, and we were back at square one with regards to the performance benefits of lighting with merged lights. Basically, a performance-affecting visual workaround to fix a performance workaround. It’s getting complicated…

A Workaround For Something Broken?

At this point you probably feel like something’s odd, not working, or simply that light channels are a hack for something broken in the way we light scenes. And you’d be right to think so. To put things in perspective: though we programmers work really hard to constantly improve their representation and behavior, real-time lights in video games generally don’t behave the way they’re intuitively expected to. Among the many discrepancies, the following stand out:

  1. Shadows are commonly missing from many lights
    1. Only a few select (key) lights get shadows, not all of them.
    2. Shadowless lights shine through walls, and can hit unintended targets.
  2. Lights don’t (all) trigger inter-reflections / indirect illumination
    1. The lack of proper GI on all dynamic lights means direct illumination only.
    2. Even if your engine has real-time GI, it’s most likely limited to a few lights.

The lack of good GI / inter-reflection / indirect illumination makes artists want to manually add secondary/fill “fixup” lights to artificially simulate such effects, for both environments and characters, sometimes separately and sometimes handling both cases simultaneously. In practice this can work, but it easily becomes a mess of lights unless you are very strict and keep these in separate layers. And even if you are organized, since most of these lights don’t cast shadows, things often end up getting hit by artificial lights that shouldn’t.


One common example is the Fridge Mouth Effect: fill lights that are intended to light characters’ faces, or to enhance the lighting on the environment, aren’t shadowed and end up lighting the inside of characters’ mouths, making them glow. The same artifact shows up on ears rendered with translucency but no self-shadowing.

Fridge Mouth Effect – Non-shadow-casting fill light coming from the right, during a randomly positioned cutscene in Fallout 4

Artists then want to isolate where these fixups happen. They want to work around the shadowing and GI limitations by controlling where the light ends up. This is where light channels come in, but also other exotic modifications to physically-plausible light attenuation, such as custom falloff curves. The latter is up for another discussion. 😉

Missing Shadows Feels Like a Big Deal. Is this it?

If we had shadows on everything, it feels like most of these issues would be non-issues. At the end of the day, it’s also about fighting priorities: total artist control vs practical AAA production realities:


Using flags to enforce rules to compensate for the lack of light/shadow behavior would make some sense if it weren’t for the fact that it 1) doesn’t work well with deferred, 2) creates lighting discrepancies, 3) significantly increases scene-management complexity for both art & code, and 4) breaks global illumination. In this day and age, with the sheer number of available dynamic lights, heavy usage of real-time & static shadow-caching atlases, and more game teams working on solving real-time GI properly, it feels like asking for light channels is a matter of convenience for an approach that used to work on previous-generation titles, a consequence of getting used to the previous era of lighting systems.


But I Really Want/Need To Make This Work With Deferred…

Simple G-Buffer, with Lighting Channel (LC)

In the case of deferred, dedicating a full channel to storing a bitmask is probably not what you want to do. If you really want to make this work, you can instead store a subset of essential lighting channels in a few bits “borrowed” from other channels. Nonetheless, you will have to figure out how this interacts with your forward path, your particles, your more complex multi-layer environments/scenes, and whether it’s worth the hassle.
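As a sketch of what “borrowing” bits can look like, here is a hypothetical 8-bit G-buffer channel shared between a material ID and a 2-bit light-channel mask. The split, names and encoding are assumptions for illustration, not a recommendation:

// Pack a 6-bit material ID and a 2-bit light-channel mask into one
// 8-bit G-buffer channel (materialID in [0, 63], lightChannels in [0, 3]).
float EncodeIDAndLightChannels(uint materialID, uint lightChannels)
{
    uint packed = (materialID << 2) | (lightChannels & 0x3);
    return (float)packed / 255.0f;
}

// Recover the light-channel mask during the deferred lighting pass.
uint DecodeLightChannels(float encoded)
{
    uint packed = (uint)round(encoded * 255.0f);
    return packed & 0x3;
}

// A light then contributes only when its own mask overlaps the surface's:
// if ((lightMask & DecodeLightChannels(gbufferSample)) == 0) skip the light.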

So, What’s The Conclusion?

Taking a step back to look at how we use our tools, figuring out what works and what doesn’t with technology, and rethinking our workflows is part of being game developers, and applies to all professions in the field. Never being satisfied, and always looking at how we can do better, is necessary to our success, individually but also as an industry. Such an industry-wide challenge happened not too long ago with the first generation of games that showcased PBR. I recall discussions with friends who pioneered this at various studios; it wasn’t easy to get everyone on board, until the industry saw the true value in the long-term investment and embraced the change. Now, it’s hard to go back. 😉

In the case of lighting approaches & tools, while we still have some major challenges to tackle with shadows and global illumination, maybe it’s also time to take a leap of faith and think about the long-term value of moving away from some old concepts such as light channels. That being said, I invite you to have this discussion with the various rendering programmers, lighting artists and technical artists at your studio: get perspective on your game’s needs, figure out what needs to happen to get everyone on board with a solution that works for everybody, keep the conversations going, and blog about what worked for your project.

It’s not necessarily about the conclusion, but rather about the discussion. Looking forward to hearing about it, and how your investment in shadows and unified lighting & GI solutions has paid off in the long run. 😉


Addendum – Another Perspective From The Movie Industry


Addendum – New York Times

I was asked by the New York Times if I could do a shorter version of this article, for a tech column:

In case you haven’t seen the original article. A few might find this amusing 😉

Thanks

Thanks to everyone who provided feedback by responding to the Twitter poll (Bart Wronski, Steve Anichini, Sébastien Lagarde, Paul Greveson, Don Williamson, Stephen Hill, and Jordan Walker), and especially Jon Greenberg and Nicolas Lopez for the additional feedback and conversations. It was nice to have both artists and programmers express their views. Let’s keep the conversations going – it’s super important for our industry!

References

[1] Unreal Developer Kit (UDK), “Light Environments”. Online.

[2] Harada, Takahiro. “Forward+: Bringing Deferred Lighting To The Next Level”, Eurographics 2012. Online.

[3] Olsson, Ola. “Clustered Deferred and Forward Shading”, HPG 2012. Online.

[4] Greenberg, Jon. “Hitting 60Hz in Unreal Engine”, GDC 2009. Online.

[5] Greenberg, Jon. “Dynamic Lighting in Mortal Kombat vs DC Universe”, 2012. Online.

Finding Next-Gen – Part I – The Need For Robust (and Fast) Global Illumination in Games

Figure 1: Direct and Indirect Illumination from a single directional light source. [1]

This post is part of the series “Finding Next-Gen”. Original version on 2015/11/08. Liveblogging, because opinions evolve over time.

Global Illumination?

Global illumination (GI) is a family of algorithms used in computer graphics that simulate how light interacts and transfers between objects in a scene. With its roots in light transport theory (the mathematics behind how energy transfers between various media, taking visibility into account), GI accounts both for the light that comes directly from a light source (direct lighting/illumination), and for how this light is reflected by and onto other surfaces (indirect lighting/illumination).
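This direct/indirect split is formalized by the rendering equation. As a reference (standard notation, not from the original post), the outgoing radiance at a point x in direction ω_o is:

L_o(x, \omega_o) = L_e(x, \omega_o) + \int_{\Omega} f_r(x, \omega_i, \omega_o) \, L_i(x, \omega_i) \, (n \cdot \omega_i) \, \mathrm{d}\omega_i

where L_e is emitted radiance, f_r is the BRDF, and the integral gathers incoming radiance L_i over the hemisphere Ω around the surface normal n. Direct illumination corresponds to L_i arriving straight from light sources; indirect illumination is L_i that has already bounced off other surfaces.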

As seen in Figure 1, global illumination greatly increases the visual quality of a scene by providing a rich, organic and physically convincing simulation of light. Rather than solely depending on a manual (human) process to achieve the desired look, the mathematics behind GI allow lighting artists to create visually convincing scenes without having to worry about manually replicating the complexity of effects such as light scattering, color bleeding, or other visuals that are difficult to represent artistically using only direct illumination.

Continue reading “Finding Next-Gen – Part I – The Need For Robust (and Fast) Global Illumination in Games”

Finding Next-Gen: Index

It’s Been Awhile…

This is the index page for a series of blog posts I’m currently writing about some challenges in real-time rendering and my perspective on these topics.

In no way is this an attempt to sum it all up or provide perfect solutions, but rather to add to the discussion on topics that are close to me and resonate with me, while respecting my various NDAs. What you will find here is undeniably inspired and fueled by the various presentations and discussions from the latest conferences, as well as by discussions in the various places where graphics programmers tend to hang out. None of it would be possible without this amazing community of developers who share on a daily basis – thanks to everyone for the inspiration and for always sharing your discoveries and opinions! Much needed for progress.

Also, this page will most likely evolve and change. Some topics might appear or be grouped together, and some might change significantly depending on how much content I can put together. Feel free to come back and check this page over time.

Please leave comments if need be, and thanks for reading!

Finding Next-Gen

Deformable Snow and DirectX 11 in Batman: Arkham Origins

It’s been a while, but I finally found some time for a quick post rounding up the presentations I gave this year at the Game Developers Conference (GDC) and NVIDIA’s GPU Technology Conference (GTC). These presentations showcase and explain some of the features developed for Batman: Arkham Origins.

Continue reading “Deformable Snow and DirectX 11 in Batman: Arkham Origins”

Blending Normal Maps?

What is the best way to blend two normal maps together? Why can’t I just add two normal maps together in Photoshop? I heard that to combine two normals together, you need to add the positive components and subtract the negative components, then renormalize. Looks right to me… Why shouldn’t I be using Overlay (or a series of Photoshop blend modes) to blend normal maps together? I want to add detail to surfaces. How does one combine normal maps in real-time so that the detail normal map follows the topology described by the base normal map?

If this is something you’ve heard before, or something you’ve asked yourself, check out this article, written together with Stephen Hill (@self_shadow), on the topic of blending normal maps.
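As a teaser, one of the techniques the article arrives at, Reoriented Normal Mapping, looks roughly like this. This is a from-memory sketch assuming both maps use the standard [0,1] texture encoding; see the article for the derivation and the exact listing:

// Reoriented Normal Mapping (sketch): rotates the detail normal so it
// follows the surface described by the base normal.
float3 BlendRNM(float3 baseSample, float3 detailSample) // both in [0,1]
{
    float3 t = baseSample   * float3( 2,  2, 2) + float3(-1, -1,  0);
    float3 u = detailSample * float3(-2, -2, 2) + float3( 1,  1, -1);
    float3 r = t * dot(t, u) - u * t.z;
    return normalize(r); // in [-1,1]; repack with * 0.5 + 0.5 if needed
}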

Continue reading “Blending Normal Maps?”