
# A Better Way to Scalarize a Shader

Tue, 20 Oct 2020

This will be an advanced article. It assumes you not only know how to write shaders but also how they work on a low level (like vector versus scalar registers) and how to optimize them using scalarization. It all starts with the need to index into an array of texture or buffer descriptors, where the index is dynamic – it may vary from pixel to pixel. This is useful e.g. for bindless-style rendering or for blending multiple texture layers, such as on a terrain. To make it work properly in an HLSL shader, you need to surround the indexing operation with the pseudo-function NonUniformResourceIndex. See also my old blog post “Direct3D 12 - Watch out for non-uniform resource index!”.

Texture2D g_Textures[] : register(t1);
...
return g_Textures[NonUniformResourceIndex(textureIndex)].Load(pos);

In many cases, this is enough. The driver will do its magic to make things work properly. But if your logic dependent on textureIndex is more complex than a single Load or SampleGrad – e.g. you sample multiple textures or do some calculations (let's call it MyDynamicTextureIndexing) – then it might be beneficial to scalarize the shader manually using a loop and the wave functions from HLSL Shader Model 6.0.

I learned how to do scalarization from the 2-part article “Intro to GPU Scalarization” by Francesco Cifariello Ciardi and the presentation “Improved Culling for Tiled and Clustered Rendering” by Michał Drobot, linked from it. Both sources propose an implementation like the following HLSL snippet:

// WORKING, TRADITIONAL
float4 color = float4(0.0, 0.0, 0.0, 0.0);
uint currThreadIndex = WaveGetLaneIndex();
uint2 currThreadMask = uint2(
   currThreadIndex < 32 ? 1u << currThreadIndex : 0,
   currThreadIndex < 32 ? 0 : 1u << (currThreadIndex - 32));
uint2 activeThreadsMask = WaveActiveBallot(true).xy;
while(any(currThreadMask & activeThreadsMask) != 0)
{
   uint scalarTextureIndex = WaveReadLaneFirst(textureIndex);
   uint2 scalarTextureIndexThreadMask = WaveActiveBallot(scalarTextureIndex == textureIndex).xy;
   activeThreadsMask &= ~scalarTextureIndexThreadMask;
   [branch]
   if(scalarTextureIndex == textureIndex)
   {
       color = MyDynamicTextureIndexing(textureIndex);
   }
}
return color;

It involves a bit mask of active threads. From the moment I first saw this code, I started wondering: Why is it needed? A mask of threads that still want to continue spinning the loop is already maintained implicitly by the shader compiler. Couldn't we just break; from the loop when done with the textureIndex of the current thread?! So I wrote this short piece of code:

// BAD, CRASHES
float4 color = float4(0.0, 0.0, 0.0, 0.0);
while(true)
{
   uint scalarTextureIndex = WaveReadLaneFirst(textureIndex);
   [branch]
   if(scalarTextureIndex == textureIndex)
   {
       color = MyDynamicTextureIndexing(textureIndex);
       break;
   }
}
return color;

…and it crashed my GPU. At first I thought it might be a bug in the shader compiler, but then I recalled footnote [2] in part 2 of the scalarization tutorial, which mentions an issue with helper lanes. Let me elaborate on this. When a shader is executed in SIMT fashion, individual threads (lanes) may be active or inactive. Active lanes are those that do their job. Inactive lanes may be inactive from the very beginning, because we are at the edge of a triangle and there are not enough pixels to make use of all the lanes, or they may be disabled temporarily, e.g. because we are executing an if section that some threads didn't want to enter. But in pixel shaders there is a third kind of lane – helper lanes. These are used instead of inactive lanes to make sure full 2x2 quads always execute the code, which is needed to calculate derivatives – either explicitly with ddx/ddy or implicitly when sampling a texture to select the correct mip level. A helper lane executes the code (like an active lane), but doesn't export its result to the render target (like an inactive lane).

As it turns out, helper lanes also don't contribute to wave functions – they behave like inactive lanes. Can you already see the problem here? In the loop shown above, it may happen that a helper lane has a textureIndex different from that of every active lane in the wave. It will then never get its turn to process it in a scalar fashion, so it will spin in an infinite loop, causing a GPU crash (TDR)!

Then I thought: what if I disable helper lanes just once, before the whole loop? So I came up with the following shader. It seems to work fine. I also think it is better than the first solution, as it operates on the thread bit mask only once, at the beginning, so it needs fewer variables stored in GPU registers and does fewer calculations in every loop iteration. Now I'm wondering: is there something wrong with my idea that I can't see? Or did I just invent a better way to scalarize shaders?

// WORKING, NEW
float4 color = float4(0.0, 0.0, 0.0, 0.0);
uint currThreadIndex = WaveGetLaneIndex();
uint2 currThreadMask = uint2(
   currThreadIndex < 32 ? 1u << currThreadIndex : 0,
   currThreadIndex < 32 ? 0 : 1u << (currThreadIndex - 32));
uint2 activeThreadsMask = WaveActiveBallot(true).xy;
[branch]
if(any((currThreadMask & activeThreadsMask) != 0))
{
   while(true)
   {
       uint scalarTextureIndex = WaveReadLaneFirst(textureIndex);
       [branch]
       if(scalarTextureIndex == textureIndex)
       {
           color = MyDynamicTextureIndexing(textureIndex);
           break;
       }
   }
}
return color;

Tags: #gpu #directx #optimization

# Which Values Are Scalar in a Shader?

Wed, 14 Oct 2020

GPUs are highly parallel processors. Within one draw call or compute dispatch there might be thousands or millions of invocations of your shader. Some variables in such a shader have a constant value for all invocations in the draw call / dispatch. We can call them constant or uniform. A literal constant like 23.0 is surely such a value, and so is a variable read from a constant (uniform) buffer – let's call it cbScaleFactor – or any calculation on such data, like (cbScaleFactor.x + cbScaleFactor.y) * 2.0 - 1.0.

Other values may vary from thread to thread. These surely include vertex attributes, as well as system-value semantics like SV_Position in a pixel shader (denoting the position of the current pixel on the screen) or SV_GroupThreadID in a compute shader (the identifier of the current thread within a thread group), and any calculations based on them. For example, sampling a texture using non-constant UV coordinates will result in a non-constant color value.

But there is another level of grouping of threads. GPU cores (Compute Units, Execution Units, CUDA cores – however we call them) execute a number of threads at once in a SIMD fashion – or, more correctly, SIMT. For an explanation of the difference, see my old post: “How Do Graphics Cards Execute Vector Instructions?” It's usually something like 8, 16, 32, or 64 threads executing on one core, together called a wave in HLSL and a subgroup in GLSL.

Normally you don't need to care about this fact. However, recent versions of HLSL and GLSL added intrinsic functions that allow exchanging data between lanes (threads/invocations within a wave/subgroup) – see “HLSL Shader Model 6.0” or “Vulkan Subgroup Tutorial”. Using them may help optimize shader performance.

This additional level of grouping creates the possibility for a variable to be or not to be uniform (to have the same value) across a single wave, even if it's not constant across the entire draw call or dispatch. We can also call such a value scalar, as it tends to go into scalar registers (SGPRs) rather than vector registers (VGPRs) on AMD architecture, which is generally good for performance. The simple cases I mentioned above still apply: what's constant across the entire draw call is also scalar within a wave, and what varies from thread to thread is not scalar. Some wave functions, like WaveReadLaneFirst, WaveActiveMax, WaveActiveAllTrue, return the same value for all threads, so their result is always scalar.
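
To illustrate, here is a minimal HLSL sketch (the variable names are made up for this example; cbScaleFactor comes from a constant buffer, input.texCoord is an interpolated vertex attribute):

float a = (cbScaleFactor.x + cbScaleFactor.y) * 2.0; // scalar: constant for the whole draw call
float b = input.texCoord.x;                          // not scalar: varies from pixel to pixel
float c = WaveActiveMax(b);                          // scalar: wave functions return one value for the whole wave
float d = WaveReadLaneFirst(b);                      // scalar: every lane gets the value of the first active lane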

Knowing which values are scalar and which ones may not be is necessary in some cases. For example, indexing into an array of buffers or textures requires the special keyword NonUniformResourceIndex if the index is not uniform across the wave. I warned about it in my blog post “Direct3D 12 - Watch out for non-uniform resource index!”. Back then I was working on the shader compiler at Intel, helping to finish the DX12 implementation before the release of Windows 10. Now, 5 years later, it is still a tricky thing to get right.

Another such case is the function WaveReadLaneAt, which “returns the value of the expression for the given lane index within the specified wave”. The index of the lane to fetch used to be required to be scalar, but developers discovered that it actually works fine with a dynamically varying value, as Ken Hu described in his blog post “HLSL pitfalls”. Now Microsoft has formally admitted that it works and allowed LaneIndex to be any value by making this GitHub commit to their documentation.
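
For example, this short HLSL sketch (myValue is a hypothetical per-pixel value) reads data from a dynamically computed lane – a rotation within the wave – which the updated documentation now permits:

uint srcLane = (WaveGetLaneIndex() + 1) % WaveGetLaneCount(); // varies from lane to lane
float neighborValue = WaveReadLaneAt(myValue, srcLane);       // each lane fetches the value of its neighbor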

Since it is so important to know which arguments need to be scalar and which values are scalar, you should also know about some less obvious, tricky cases.

SV_GroupID in a compute shader – identifier of the group within a compute dispatch. This one surely is uniform across the wave. I didn't search the specifications for this topic, but it seems obvious that if groupshared memory is private to a thread group and a synchronization barrier can be issued across a thread group, threads from different groups cannot be assigned to a single wave. Otherwise everything would break.

SV_InstanceID in a vertex shader – index of an instance within an instanced draw call. It looks similar, but the answer is actually the opposite. I've seen discussions about it many times. It is not guaranteed anywhere that threads in one wave will calculate vertices of the same instance. While inconvenient for those who would like to optimize their vertex shader using wave functions, it gives the graphics driver an opportunity to increase utilization by packing vertices from multiple instances into one wave.

SV_GroupThreadID.xyz in a compute shader – identifier of the thread within a thread group in a particular dimension. The article “Porting Detroit: Become Human from PlayStation® 4 to PC – Part 2” on GPUOpen.com suggests that by using [numthreads(64,2,1)] you can be sure that waves will be dispatched as 32x1x1 or 64x1x1, so that SV_GroupThreadID.y will be scalar across a wave. It may be true for AMD architecture and other GPUs currently on the market, so relying on this may be a good optimization opportunity on consoles with known, fixed hardware, but it is not formally correct to assume it on PC. Neither the D3D nor the Vulkan specification says that threads from a compute thread group are assigned to waves in row-major order. The order is undefined, so theoretically a driver in a new version may decide to spawn waves of 16x2x1. It is also not guaranteed that some mysterious new GPU couldn't appear in the future that is 128 lanes wide – the documentation of the WaveGetLaneCount function says “the result will be between 4 and 128”. Such a GPU would execute an entire 64x2x1 group as a single wave. In both cases, SV_GroupThreadID.y wouldn't be scalar.
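
For illustration, here is a minimal compute shader sketch of the setup described in that article (the entry point name is arbitrary):

// On GPUs that dispatch 32- or 64-wide waves in row-major order, groupThreadID.y
// would be uniform within a wave - but no specification guarantees this mapping.
[numthreads(64, 2, 1)]
void CSMain(uint3 groupThreadID : SV_GroupThreadID)
{
   // groupThreadID.y may or may not be scalar across the wave - don't rely on it on PC.
}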

Long story short: Unless you can prove otherwise, always assume that what is not uniform (constant) across the entire draw call or dispatch is also not uniform (scalar) across the wave.

Tags: #gpu #directx #vulkan #optimization

# System Value Semantics in Compute Shaders - Cheat Sheet

Tue, 29 Sep 2020

After compute shaders appeared, programmers no longer need to pretend they do graphics and render pixels when they want to do some general-purpose computations on a GPU (GPGPU). They can just dispatch a shader that reads and writes memory in a custom way. Such a shader is a short (or not so short) program invoked thousands or millions of times to process a piece of data. To work correctly, it needs to identify the current thread. Threads (invocations) of a compute shader are not just indexed linearly as 0, 1, 2, ... It's more complex than that. Their indexing can use up to 3 dimensions, which simplifies operating on some data like images or matrices. They also come in groups, with the number of threads in one group declared statically as part of the shader code and the number of groups to execute passed dynamically in CPU code when dispatching the shader.

This raises the question of how to identify the current thread. HLSL offers a number of system-value semantics for this purpose, and GLSL does the same by defining equivalent built-in variables. For a long time I couldn't remember their names, as the ones in HLSL are quite misleading. If GroupID is the ID of the entire group and GroupThreadID is the ID of the thread within a group, GroupIndex should be a flattened index of the entire group, right? Wrong! It's actually the index of a single thread within a group. GLSL is more consistent in this regard, clearly stating "WorkGroup" versus "Invocation" and "Local" versus "Global". So, although Microsoft provides a great explanation of their SVs with a picture on pages like SV_DispatchThreadID, I thought it would be nice to gather all this in the form of a table – a small cheat sheet:

| HLSL Semantic | GLSL Variable | Type (Dimension) | Unit | Reference |
|---|---|---|---|---|
| SV_GroupID | gl_WorkGroupID | uint3 (3D) | Entire group | Global in dispatch |
| SV_GroupThreadID | gl_LocalInvocationID | uint3 (3D) | Single thread | Local in group |
| SV_DispatchThreadID | gl_GlobalInvocationID | uint3 (3D) | Single thread | Global in dispatch |
| SV_GroupIndex | gl_LocalInvocationIndex | uint (flattened) | Single thread | Local in group |
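
To see how these values relate to each other, here is a minimal HLSL sketch (the entry point name is arbitrary). With [numthreads(X, Y, Z)], the following identities hold:

// SV_DispatchThreadID == SV_GroupID * uint3(X, Y, Z) + SV_GroupThreadID
// SV_GroupIndex == SV_GroupThreadID.z * X * Y + SV_GroupThreadID.y * X + SV_GroupThreadID.x
[numthreads(8, 8, 1)]
void CSMain(
   uint3 groupID : SV_GroupID,
   uint3 groupThreadID : SV_GroupThreadID,
   uint3 dispatchThreadID : SV_DispatchThreadID,
   uint groupIndex : SV_GroupIndex)
{
   // e.g. dispatchThreadID.xy can directly address a texel of a 2D image
}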

Tags: #gpu #directx #opengl #vulkan

# AquaFish 2 - My Game From 2009

Thu, 06 Aug 2020

I've made a short video showing a game I developed more than 10 years ago: AquaFish 2. It was my first commercial project, published by Play Publishing, developed using my custom engine The Final Quest.

Tags: #productions #history

# Why Not Use Heterogeneous Multi-GPU?

Wed, 22 Jul 2020

There was an interesting discussion recently on a Slack channel about using an integrated GPU (iGPU) together with a discrete GPU (dGPU). Many sound ideas were said there, so I think it's worth writing them down. But because I have probably never blogged about multi-GPU before, a few words of introduction first:

The idea to use multiple GPUs in one program is not new, but not very widespread either. In old graphics APIs like Direct3D 11 it wasn't easy to implement. Doing it right in a complex game often involved engaging driver engineers from the GPU manufacturer (like AMD, NVIDIA) or using custom vendor extensions (like AMD GPU Services - see for example Explicit Crossfire API).

The new generation of graphics APIs – Direct3D 12 and Vulkan – is lower level and gives more direct access to the hardware. This includes the possibility to implement multi-GPU support on your own. There are two modes of operation. If the GPUs are identical (e.g. two graphics cards of the same model plugged into the motherboard), you can use them as one device object. In D3D12 you then index them as Node 0, Node 1, ... and specify a NodeMask bit mask when allocating GPU memory, submitting commands, and doing all sorts of GPU things. Similarly, in Vulkan you have the VK_KHR_device_group extension available that allows you to create one logical device object that will use multiple physical devices.
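
As a minimal C++ sketch of this linked-node path (assuming device is an ID3D12Device created on an adapter that exposes two nodes; error handling omitted):

// Create a direct command queue that lives on node 1 (the second GPU).
D3D12_COMMAND_QUEUE_DESC queueDesc = {};
queueDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
queueDesc.NodeMask = 1u << 1;
ID3D12CommandQueue* queueNode1 = nullptr;
device->CreateCommandQueue(&queueDesc, IID_PPV_ARGS(&queueNode1));

// Describe a heap physically resident on node 0, but visible to both nodes
// (to be used with CreateCommittedResource or CreateHeap).
D3D12_HEAP_PROPERTIES heapProps = {};
heapProps.Type = D3D12_HEAP_TYPE_DEFAULT;
heapProps.CreationNodeMask = 1u << 0;
heapProps.VisibleNodeMask = (1u << 0) | (1u << 1);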

But this post is about heterogeneous/asymmetric multi-GPU, where there are two different GPUs installed in the system, e.g. one integrated with the CPU and one discrete. A common example is a laptop with "switchable graphics", which may have an Intel CPU with its integrated “HD” graphics plus an NVIDIA GPU. There may even be two different GPUs from the same manufacturer! My new laptop (ASUS TUF Gaming FX505DY) has an AMD Radeon Vega 8 + Radeon RX 560X. Another example is a desktop PC with CPU-integrated graphics and a discrete graphics card installed. Such a combination can still be used by a single app, but to do that, you must create and use two separate Device objects. But just because you can doesn't mean you should…
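
A minimal sketch of that heterogeneous path in D3D12 – enumerating adapters and creating a separate device for each hardware GPU (error handling omitted):

#include <d3d12.h>
#include <dxgi1_4.h>
#include <vector>

std::vector<ID3D12Device*> CreateAllDevices()
{
   IDXGIFactory4* factory = nullptr;
   CreateDXGIFactory1(IID_PPV_ARGS(&factory));

   std::vector<ID3D12Device*> devices;
   IDXGIAdapter1* adapter = nullptr;
   for(UINT i = 0; factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i)
   {
       DXGI_ADAPTER_DESC1 desc = {};
       adapter->GetDesc1(&desc);
       // Skip the software (WARP) adapter - we only want hardware GPUs here.
       if((desc.Flags & DXGI_ADAPTER_FLAG_SOFTWARE) == 0)
       {
           ID3D12Device* device = nullptr;
           if(SUCCEEDED(D3D12CreateDevice(adapter, D3D_FEATURE_LEVEL_11_0, IID_PPV_ARGS(&device))))
               devices.push_back(device); // e.g. the dGPU and the iGPU become two separate devices
       }
       adapter->Release();
   }
   factory->Release();
   return devices;
}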

The first question is: are there games that support this technique? Probably only a few… There is just one example I've heard of: Ashes of the Singularity by Oxide Games, and it was many years ago, when DX12 was still fresh. Other than that, there are mostly tech demos, e.g. "WITCH CHAPTER 0 [cry]" by Square Enix, as described on the DirectX Developer Blog (also 5 years old).

An iGPU typically has lower computational power than a dGPU, but it could still accelerate some of the computations needed each frame. One idea is to hand the already-rendered 3D scene over to the iGPU so it can finish it with screen-space postprocessing effects and present it, which sounds even better if the display is connected to the iGPU. Another option is to accelerate some computations, like occlusion culling, particles, or water simulation. There are some excellent learning materials about this technique. The best one I can think of is: Multi-Adapter with Integrated and Discrete GPUs by Allen Hux (Intel), GDC 2020.

However, there are many drawbacks of this technique, which were discussed in the Slack chat I mentioned:

  • It's difficult to implement multi-GPU support in general and to synchronize things properly.
  • iGPUs have greatly varying performance, from quite fast to very slow, so implementing it in a way that always gives a performance uplift is even harder.
  • Passing data back and forth between dGPU and iGPU involves multiple copies. The cost of these may be larger than the performance benefit of computing on the iGPU.
  • The iGPU shares the same power and thermal limits, memory bandwidth, and caches as the CPU, so they may slow each other down.
  • If you offload finishing the frame (postprocessing and Present) to the iGPU, you may improve throughput a bit, but you increase latency a lot.
  • You need to support systems without an iGPU as well, so your testing matrix expands. (An interesting idea was posted that for a DirectX workload you might fall back to the software-emulated WARP device – it's quite efficient and of good quality in terms of correctness and compliance with GPU-accelerated DX.)
  • Finishing and presenting a frame on the iGPU sounds like a good idea if the display is connected to the iGPU, but even that is not certain. Multi-GPU laptops usually have the built-in display connected to the iGPU, but the external display output (e.g. HDMI) may be connected to the iGPU or to the dGPU (especially in "gaming laptops") – you never know.
  • Conscientious gamers tend to update their graphics drivers for the dGPU, but the driver for the iGPU is often left at an ancient version, full of bugs.

Conclusion: Supporting heterogeneous multi-GPU in a game engine sounds like an interesting technical challenge, but better think twice before doing it in a production code.

BTW, if you want to use just one GPU and are wondering how to select the right one, see my old post: Switchable graphics versus D3D11 adapters.

Tags: #rendering #directx #vulkan #microsoft

# How to Disable Notification Sound in Messenger for Android?

Thu, 09 Jul 2020

Applications and websites fight for our attention. We want to stay connected and informed, but too many interruptions are not good for our productivity or mental health. Different applications have different settings dedicated to silencing notifications. I recently bought a new smartphone, so I needed to install and configure all the apps (which is a big task these days, just as it always used to be with a Windows PC after "format C:" and a system reinstall).

Facebook Messenger for Android offers an on/off setting for all notifications, plus a choice of the sound used for a notification and for an incoming call. Unfortunately, it doesn't offer an option to mute the notification sound alone. You can only either choose among several different sound effects or disable all notifications of the app entirely. What if you want to keep notifications active so they appear in the Android drawer, use vibration, get sent to a smart band, and keep incoming calls ringing – and you just want to mute the sound of incoming messages?

Here is the solution I found. It turns out you can upload a custom sound file to your smartphone and use it. For that I generated a small WAV file - 0.1 seconds of total silence. 1) You can download it from here:

Silence_100ms.wav (8.65 KB)

2) Now you need to put it into a specific directory in the memory of your smartphone, called "Notifications". To do this, you need an app that lets you freely manipulate files and directories, as opposed to just looking for specific content the way image or music players do. If you downloaded the file directly to your smartphone, use the free Total Commander to move this file to the "Notifications" directory. If you have it on your PC, MyPhoneExplorer is a good app for connecting to your phone over a USB cable or a WiFi network and transferring the file.

3) Finally, you need to select the file in Messenger. To do this, go to its settings > Notifications & Sounds > Notification Sound. The new file "Silence_100ms" should appear mixed with the list of default sound effects. After choosing it, your message notifications in Messenger will be silent.

[Image: Facebook Messenger for Android – notification sound settings with the silent file selected]

There is one downside to this method. While not audible, the sound still plays on every incoming message, so if you listen to music, e.g. on Spotify, the music will fade out for a second every time the notification arrives.

Tags: #android #mobile

# Avoid double negation, unless...

Thu, 11 Jun 2020

Boolean algebra is a branch of mathematics frequently used in computer science. It deals with simple operations like AND, OR, NOT - which we also use in natural language.

In programming, two negations of a boolean variable cancel each other. You could express it in C++ as:

!!x == x

In natural languages, that's not the case. In English we don't use double negatives (unless it's intentional, like in the famous song quote "We don't need no education" :) In Polish, double negatives are used but don't result in a positive statement. For example, we say "Wegetarianie nie jedzą żadnego rodzaju mięsa.", which literally translates as "Vegetarians don't eat no kind of meat."

No matter what your native language is, in programming it's good to simplify things and avoid double negations. For example, if you design an API for your library and need to name a configuration setting for an optional optimization Foo, it's better to call it "FooOptimizationEnabled". You can then just check if it's true and, if so, do the optimization. If the setting were called "FooOptimizationDisabled", its documentation could say: "When the FooOptimizationDisabled setting is disabled, the optimization is enabled." – which sounds confusing, and we don't want that.
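
A tiny hypothetical C++ example of the difference:

// Positive naming reads naturally at the call site.
struct FooConfig
{
   bool FooOptimizationEnabled = true;
};

void Process(const FooConfig& config)
{
   if(config.FooOptimizationEnabled)
   {
       // do the optimization
   }
   // With a negative "FooOptimizationDisabled" flag, the same check would read
   // if(!config.FooOptimizationDisabled) - a double negation.
}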

But there are two cases where negative flags are justified in the design of an API. Let me give examples from the world of graphics.

1. When enabled should be the default. It's easier to set flags to 0 or to a minimum set of required flags rather than considering every available flag and whether it should be set, so a setting that most users will want enabled and will disable only on special occasions can be a negative flag. For example, Direct3D 12 has D3D12_HEAP_FLAG_DENY_BUFFERS. A heap you create can host any GPU resources by default (buffers and textures), but when you use this flag, you declare you will not use it for buffers. (Note to graphics programmers: I know this is not the best example, because usage of these flags is actually required on Resource Heap Tier 1 and there are also "ALLOW" flags, but I hope you get the idea.)

2. For backward compatibility. If something has always been enabled in previous versions and you want to give users the possibility to disable it in a new version of your library, you don't want to break their old code or ask them to update their code everywhere by adding a new "enable" flag, so it's better to add a negative flag that disables the otherwise still-enabled feature. That's what Vulkan does with VK_PIPELINE_CREATE_DISABLE_OPTIMIZATION_BIT: pipelines are optimized by default, and this flag lets you ask the driver to skip the optimization, which may speed up the creation of the pipeline. All the existing code that doesn't use this flag continues to have its pipelines optimized.
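
A minimal Vulkan sketch of opting out (only the relevant fields are shown; the rest of the pipeline setup is omitted):

#include <vulkan/vulkan.h>

VkGraphicsPipelineCreateInfo pipelineInfo = {};
pipelineInfo.sType = VK_STRUCTURE_TYPE_GRAPHICS_PIPELINE_CREATE_INFO;
// Opt out of the default behavior: the pipeline will not be optimized,
// which may make vkCreateGraphicsPipelines return faster.
pipelineInfo.flags = VK_PIPELINE_CREATE_DISABLE_OPTIMIZATION_BIT;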

Comments | #software engineering Share

# On Debug, Release, and Other Project Configurations

Sun, 17 May 2020

Foreword: I was going to write a post about #pragma optimize in Visual Studio, which I learned about recently, but then decided to describe the whole topic more broadly. As a result, this blog post can be useful or inspiring to every programmer coding in C++ or even in other languages, although my examples are based on C++ as used in Microsoft Visual Studio on Windows.

When we compile a program, we often need to choose one of several possible "configurations". Visual Studio creates two of them for a new C++ project, called "Debug" and "Release". As their names imply, the first one is mostly intended for development and debugging, while the other should be used to generate the final binary of the program distributed to external users. But there is more to it. Each of these configurations actually sets multiple parameters of the project, and you can change them. You can also define custom configurations and have more than two of them, which can be very useful, as you will see later.

First, let's think about the specific settings that a project configuration defines. They can be divided into two broad categories. The first one is all the parameters that control the compiler and linker. The difference between Debug and Release here is mostly about optimizations. The Debug configuration is all about having optimizations disabled (which allows full debugging functionality and also keeps compilation times short), while Release has optimizations enabled (which obviously makes the program run faster). For example, Visual Studio sets these options in Release:

  • /O2 - Optimization = Maximum Optimization (Favor Speed)
  • /Oi - Enable Intrinsic Functions = Yes
  • /Gy - Enable Function-Level Linking = Yes
  • /GL - Whole Program Optimization = Yes
  • /OPT:REF - linker option: References = Yes
  • /OPT:ICF - linker option: Enable COMDAT Folding = Yes
  • /LTCG:incremental - linker option: Link Time Code Generation = Use Fast Link Time Code Generation

Visual Studio also inserts additional code in the Debug configuration to fill memory with certain bit patterns that help with debugging the low-level memory access errors which plague C and C++ programmers. For example, seeing 0xCCCCCCCC in the debugger usually means uninitialized memory on the stack, 0xCDCDCDCD – allocated but uninitialized memory on the heap, and 0xFEEEFEEE – memory that was already freed and should no longer be used. In Release, memory under such incorrectly used pointers will just hold its previous data.

The second category of things controlled by project configurations is specific features inside the code. In the case of C and C++ these are usually enabled and disabled using preprocessor directives like #ifdef and #if. Such macros can not only be defined inside the code using #define, but can also be passed from the outside, among the parameters of the compiler, and so they can be set and changed depending on the project configuration.

The features controlled by such macros can be very diverse. Probably the most canonical example is the standard assert macro (or your custom equivalent), which we define as some error logging, an instruction to break into the debugger, or even complete program termination in the Debug config, and as an empty macro in Release. In the case of C++ in Visual Studio, the macro defined in Debug is _DEBUG and in Release it is NDEBUG; depending on the latter, the standard assert macro either does "something" or is just ignored.
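
As a sketch, a custom assert controlled by these configuration macros could look like this (MY_ASSERT is a hypothetical name; __debugbreak is the MSVC intrinsic that breaks into the debugger):

#include <intrin.h>

// _DEBUG and NDEBUG come from the compiler settings of the Debug/Release configuration.
#ifdef _DEBUG
   #define MY_ASSERT(expr) do { if(!(expr)) __debugbreak(); } while(false)
#else
   #define MY_ASSERT(expr) ((void)0)
#endif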

There are more possibilities. Depending on these standard predefined macros or your custom ones, you can cut different functionality out of the code. One example is any instrumentation that lets you analyze and profile the program's execution (like calls to Tracy) – you probably don't want it in the final client build. The same goes for detailed logging, any hidden developer settings, or cheat codes (in the case of games). On the other hand, you may want to include in the final build something that's not needed during development, like checking the user's license, anti-piracy or anti-cheat protection, or the generation of certificates needed for the program to work on non-developer machines.

As you can see, there are many options to consider. Sometimes it can make sense to have more than two project configurations. Probably the most common case is the need for a "Profile" configuration that allows measuring performance accurately – it has all the compiler optimizations enabled but still keeps the instrumentation needed for profiling in the code. Another idea would be to wrap the super low-level, frequently executed checks, like (index < size()) inside vector::operator[], in a separate macro called HEAVY_ASSERT and have a configuration called "SuperDebug" that we know works very slowly but has all those checks enabled. On the other end, remember that the "FinalFinal" configuration you will use to generate the final binary for the users should be built and tested in your Continuous Integration during development, not only one week before the release date. Bugs that occur in only one configuration and not in the others are not uncommon!
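
A hypothetical sketch of such a HEAVY_ASSERT, assuming the "SuperDebug" configuration passes a HEAVY_CHECKS_ENABLED macro among the compiler parameters:

#include <cassert>
#include <cstddef>

#ifdef HEAVY_CHECKS_ENABLED
   #define HEAVY_ASSERT(expr) assert(expr)
#else
   #define HEAVY_ASSERT(expr) ((void)0)
#endif

// Example use in a hot path:
float GetElement(const float* data, size_t count, size_t index)
{
   HEAVY_ASSERT(index < count); // compiled out in every configuration except "SuperDebug"
   return data[index];
}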

Some bugs just don't happen in Debug, e.g. due to uninitialized memory containing a consistent 0xCCCCCCCC instead of garbage data, or a race condition between threads not occurring because of the different time it takes to execute certain functions. In some projects, the Debug configuration works so slowly that it's not even possible to test the program on a real, large data set in this configuration. I consider that a bad coding practice and I think it shouldn't happen, but it happens quite often, especially when STL is used, where every access to an element like myVector[i] in unoptimized code is a function call with a range check instead of just a pointer dereference. In any case, sometimes we need to investigate bugs occurring in the Release configuration. Not all hope is lost then, because in Visual Studio the debugger still works, just not as reliably as in Debug. Because of optimizations made by the compiler, the instruction pointer (the yellow arrow) may jump across the code inconsistently, and some variables may be impossible to preview.

Here comes the trick that inspired me to write this whole blog post. I recently learned that there is this Microsoft-specific pragma directive:

#pragma optimize("", off)

which, if you put it at the beginning of a .cpp file or just before a function of interest, disables all compiler optimizations from that point until the end of the file, making debugging of that code nice and smooth, while the rest of the program behaves as before. (See also its documentation.) A nice trick!
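
A minimal usage sketch (the function name is made up):

#pragma optimize("", off)
void MySuspiciousFunction()
{
   // This function is now compiled without optimizations even in Release,
   // so stepping through it in the debugger works reliably.
}
#pragma optimize("", on) // restore the optimizations selected by the compiler options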

Tags: #c++ #visual studio
