Tue 20 Oct 2020
This will be an advanced article. It assumes you not only know how to write shaders but also understand how they work at a low level (e.g. vector versus scalar registers) and how to optimize them using scalarization. It all starts with the need to index into an array of texture or buffer descriptors, where the index is dynamic – it may vary from pixel to pixel. This is useful e.g. when doing bindless-style rendering or when blending various layers of textures, like on a terrain. To make it work properly in an HLSL shader, you need to surround the indexing operation with the pseudo-function NonUniformResourceIndex. See also my old blog post “Direct3D 12 - Watch out for non-uniform resource index!”.
Texture2D g_Textures[] : register(t1);
...
return g_Textures[NonUniformResourceIndex(textureIndex)].Load(pos);
In many cases, this is enough. The driver will do its magic to make things work properly. But if the logic that depends on textureIndex is more complex than a single Load or SampleGrad – e.g. you sample multiple textures or do some calculations (let's call it MyDynamicTextureIndexing) – then it might be beneficial to scalarize the shader manually using a loop and wave functions from HLSL Shader Model 6.0, sometimes also called a "peeling loop".
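To make the snippets below concrete, here is a minimal, made-up sketch of what such a MyDynamicTextureIndexing function could look like. The second "detail" layer, blendFactor, g_Sampler, and uv are assumptions for illustration only and are expected to be declared elsewhere in the shader:
// Hypothetical helper used in the snippets below. It touches the descriptor
// array more than once, so it is a natural candidate for manual scalarization.
// SampleLevel is used here so that no implicit derivatives are needed inside
// divergent control flow.
float4 MyDynamicTextureIndexing(uint textureIndex)
{
    float4 base = g_Textures[NonUniformResourceIndex(textureIndex)].SampleLevel(g_Sampler, uv, 0.0);
    float4 detail = g_Textures[NonUniformResourceIndex(textureIndex + 1)].SampleLevel(g_Sampler, uv * 8.0, 0.0);
    return lerp(base, detail, blendFactor);
}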
I learned how to do scalarization from the 2-part article “Intro to GPU Scalarization” by Francesco Cifariello Ciardi and the presentation “Improved Culling for Tiled and Clustered Rendering” by Michał Drobot, linked from it. Both sources propose an implementation like the following HLSL snippet:
// WORKING, TRADITIONAL
float4 color = float4(0.0, 0.0, 0.0, 0.0);
// Bit of the current lane within a 64-bit mask stored as uint2.
uint currThreadIndex = WaveGetLaneIndex();
uint2 currThreadMask = uint2(
    currThreadIndex < 32 ? 1u << currThreadIndex : 0,
    currThreadIndex < 32 ? 0 : 1u << (currThreadIndex - 32));
// Lanes that still need their textureIndex processed.
uint2 activeThreadsMask = WaveActiveBallot(true).xy;
while(any((currThreadMask & activeThreadsMask) != 0))
{
    // Broadcast the textureIndex of the first active lane - a scalar value.
    uint scalarTextureIndex = WaveReadLaneFirst(textureIndex);
    // Clear the bits of all lanes that share this index.
    uint2 scalarTextureIndexThreadMask = WaveActiveBallot(scalarTextureIndex == textureIndex).xy;
    activeThreadsMask &= ~scalarTextureIndexThreadMask;
    [branch]
    if(scalarTextureIndex == textureIndex)
    {
        color = MyDynamicTextureIndexing(textureIndex);
    }
}
return color;
It involves a bit mask of active threads. From the moment I first saw this code, I started wondering: why is it needed? A mask of the threads that still want to keep spinning the loop is already maintained implicitly by the shader compiler. Couldn't we just break; out of the loop when done with the textureIndex of the current thread?! So I wrote this short piece of code:
// BAD, CRASHES
float4 color = float4(0.0, 0.0, 0.0, 0.0);
while(true)
{
    uint scalarTextureIndex = WaveReadLaneFirst(textureIndex);
    [branch]
    if(scalarTextureIndex == textureIndex)
    {
        color = MyDynamicTextureIndexing(textureIndex);
        // Rely on the compiler's implicit execution mask to track which
        // lanes are done and simply leave the loop.
        break;
    }
}
return color;
…and it crashed my GPU. At first I thought it might be a bug in the shader compiler, but then I recalled footnote [2] in part 2 of the scalarization tutorial, which mentions an issue with helper lanes. Let me elaborate on this. When a shader is executed in SIMT fashion, individual threads (lanes) may be active or inactive. Active lanes are the ones that do their job. Inactive lanes may be inactive from the very beginning (e.g. we are at the edge of a triangle, so there are not enough pixels to make use of all the lanes) or disabled temporarily (e.g. we are executing an if section that some threads didn't want to enter). But in pixel shaders there is a third kind of lane – helper lanes. These run in place of inactive lanes to make sure full 2x2 quads always execute the code, which is needed to calculate derivatives ddx/ddy – something that also happens implicitly when sampling a texture, to select the correct mip level. A helper lane executes the code (like an active lane), but doesn't export its result to the render target (like an inactive lane).
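As a minimal illustration of why those full quads matter (g_Texture, g_Sampler, and uv are just placeholder names):
// Sample() picks a mip level from screen-space derivatives of uv, which the
// hardware obtains by differencing uv across a 2x2 quad. Pixels of the quad
// that fall outside the triangle still execute this line as helper lanes, so
// the derivatives stay well defined - they just never reach the render target.
float4 c = g_Texture.Sample(g_Sampler, uv);
// The same applies to explicit derivatives:
float2 duvdx = ddx(uv);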
As it turns out, helper lanes also don't contribute to wave functions – there they behave like inactive lanes. Can you already see the problem? In the loop shown above, it may happen that a helper lane has a textureIndex different from that of every active lane within the wave. It will then never get its turn to process it in a scalar fashion, so it will fall into an infinite loop, causing a GPU crash (TDR)!
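To make the failure concrete, consider a made-up wave state (lane roles and index values invented for illustration):
// lane 0 (active): textureIndex = 5
// lane 1 (active): textureIndex = 5
// lane 2 (active): textureIndex = 7
// lane 3 (helper): textureIndex = 9
// The helper lane doesn't contribute to WaveReadLaneFirst(), so
// scalarTextureIndex becomes 5 (lanes 0 and 1 break out), then 7 (lane 2
// breaks out). From then on only the helper lane keeps iterating; the value
// it reads is no longer backed by any active lane and nothing guarantees it
// ever equals 9, so the lane never breaks and the GPU eventually TDRs.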
Then I thought: what if I exclude helper lanes just once, before the whole loop? So I came up with the following shader. It seems to work fine. I also think it is better than the first solution, as it operates on the thread bit mask only once, at the beginning, so it keeps fewer variables in GPU registers and does fewer calculations in every loop iteration. Now I'm wondering: is there something wrong with my idea that I can't see? Or did I just invent a better way to scalarize shaders?
// WORKING, NEW
float4 color = float4(0.0, 0.0, 0.0, 0.0);
// Bit of the current lane within a 64-bit mask stored as uint2.
uint currThreadIndex = WaveGetLaneIndex();
uint2 currThreadMask = uint2(
    currThreadIndex < 32 ? 1u << currThreadIndex : 0,
    currThreadIndex < 32 ? 0 : 1u << (currThreadIndex - 32));
// Helper lanes don't contribute to the ballot, so their bit stays 0 here.
uint2 activeThreadsMask = WaveActiveBallot(true).xy;
[branch]
if(any((currThreadMask & activeThreadsMask) != 0))
{
    // Only non-helper lanes enter the loop, so every lane inside it is
    // guaranteed to eventually see its own textureIndex selected.
    while(true)
    {
        uint scalarTextureIndex = WaveReadLaneFirst(textureIndex);
        [branch]
        if(scalarTextureIndex == textureIndex)
        {
            color = MyDynamicTextureIndexing(textureIndex);
            break;
        }
    }
}
return color;
UPDATE 2020-10-28: There are some valuable comments under my tweet about this topic that I recommend checking out.
UPDATE 2021-12-03: As my colleague pointed out today, the code I showed above as "BAD" is perfectly fine for compute shaders. Only in pixel shaders do we have problems with helper lanes. Thank you Steffen!