Entries for tag "visual studio", ordered from most recent. Entry count: 62.
# Debugging third-party apps in Visual Studio
Mon 29 Apr 2024
If you are a programmer coding mostly in C++ for Windows, as I do, you likely use Microsoft Visual Studio as your IDE, including its code editor and debugger. When using Visual Studio for development, we typically compile, launch, and debug the code we develop ourselves. However, Visual Studio can also be used to debug third-party executables. Having neither debug symbols (.pdb files) nor the source code of the application results in a limited debugging experience, but it can still be useful in some cases. In this article, I will share a few tips and tricks for using the Visual Studio debugger on external .exe files.
I think this article is suitable for programmers of all levels of experience, including beginners. We won't be looking at x86 assembly code, I promise! 😀 All the screenshots were taken with Visual Studio 2022 version 17.9.6 on Windows 10.
Comments | #c++ #visual studio Share
# Why I Catch Exceptions in Main Function in C++
Sun 22 Jan 2023
Exception handling in C++ is a controversial topic. On one hand, it can be a good means of reporting and handling errors if done correctly. For it to be free from memory leaks, all memory allocations should be wrapped in smart pointers and other acquired resources (opened files and other handles) wrapped in similar RAII objects. On the other hand, it has been shown many times that throwing and handling exceptions in C++ is slow. Disabling exception handling in the C++ compiler options can speed up a program significantly. No wonder game developers dislike exceptions and often disable them completely.
Let's talk about a command-line C++ program that doesn't need to disable exception handling in the compiler options. Even if it doesn't use exceptions explicitly, some exceptions may still occur, thrown by the C++ standard library or some third-party library. When a C++ exception is thrown and not caught, the program terminates and the process exit code is some large negative number. On my system it is -1073740791 = 0xC0000409.
It would be good if the program printed some error message in such a case and returned a custom, clearly defined exit code. Therefore, when developing a command-line C++ program, I like to catch and handle exceptions in the main function, like this:
```cpp
#include <exception>
#include <cstdio>

enum PROGRAM_EXIT {
    PROGRAM_EXIT_SUCCESS   =  0,
    PROGRAM_EXIT_FAILED    = -1,
    PROGRAM_EXIT_EXCEPTION = -2
};

int ActualProgram(int argc, char** argv) {
    ...
}

int main(int argc, char** argv) {
    try {
        return ActualProgram(argc, argv);
    }
    catch(const std::exception& ex) {
        fprintf(stderr, "ERROR: %s\n", ex.what());
        return PROGRAM_EXIT_EXCEPTION;
    }
    catch(...) {
        fprintf(stderr, "UNKNOWN ERROR.\n");
        return PROGRAM_EXIT_EXCEPTION;
    }
}
```
Besides that, if you develop for Windows using Visual Studio, there is another, parallel system of throwing and catching exceptions, called Structured Exception Handling (SEH). It allows you to handle "system" errors that are not C++ exceptions and would otherwise terminate your program even when using the code shown above. Such errors include memory access violation (dereferencing a null or invalid pointer) and integer division by zero, among others. To catch them, you can use the following code:
```cpp
#include <Windows.h>

int main(int argc, char** argv) {
    __try {
        return main2(argc, argv);
    }
    __except(EXCEPTION_EXECUTE_HANDLER) {
        fprintf(stderr, "STRUCTURED EXCEPTION: 0x%08X.\n",
            GetExceptionCode());
        return PROGRAM_EXIT_EXCEPTION;
    }
}
```
A few additional notes are needed here. First, a SEH __try-__except block cannot exist in the same function as a C++ try-catch. It is fine, though, to call a function doing one kind of error handling from a function doing the other. Their order is important: C++ exceptions are also caught by SEH __except, but SEH exceptions are not caught by C++ catch. So, to do it properly, you need to make your main function do the SEH __try-__except and call some main2 function that does the C++ try-catch, which in turn calls ActualProgram, not the other way around.
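To make the layering concrete, here is a minimal sketch of the main2 function referenced above; it is simply the earlier C++ try-catch wrapper under a different name, called from the SEH-based main shown in the previous listing, and it reuses the enum and ActualProgram from the first listing:

```cpp
// The middle layer: catches C++ exceptions and is itself called
// from the SEH __try/__except block in main shown above.
int main2(int argc, char** argv) {
    try {
        return ActualProgram(argc, argv);
    }
    catch(const std::exception& ex) {
        fprintf(stderr, "ERROR: %s\n", ex.what());
        return PROGRAM_EXIT_EXCEPTION;
    }
    catch(...) {
        fprintf(stderr, "UNKNOWN ERROR.\n");
        return PROGRAM_EXIT_EXCEPTION;
    }
}
```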
If you wonder what process exit codes are returned by default when exceptions are not caught, the answer can be found in the documentation of the GetExceptionCode macro and in the Windows header files. When a memory access violation occurs, this function (or the entire process, if SEH exceptions are not handled) returns -1073741819 = 0xC0000005, which matches EXCEPTION_ACCESS_VIOLATION. When a C++ exception is thrown, the code is -1073740791 = 0xC0000409, which is not one of the EXCEPTION_ symbols, but I found it defined as STATUS_STACK_BUFFER_OVERRUN (strange…). Maybe it would be a good idea to extend the __except section shown above to decode known exception codes and print their string description.
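A minimal sketch of such decoding could look like the helper below; it covers only a handful of codes chosen as examples, and which codes are worth handling is an assumption to adjust to your needs (the constants come from Windows.h, already included above):

```cpp
// Translate a few well-known SEH exception codes into readable strings.
// Unknown codes return nullptr so the caller can print the raw value only.
static const char* DecodeSehException(DWORD code) {
    switch(code) {
    case EXCEPTION_ACCESS_VIOLATION:    return "Access violation";
    case EXCEPTION_INT_DIVIDE_BY_ZERO:  return "Integer division by zero";
    case EXCEPTION_STACK_OVERFLOW:      return "Stack overflow";
    case EXCEPTION_ILLEGAL_INSTRUCTION: return "Illegal instruction";
    default:                            return nullptr;
    }
}

// Usage inside the __except block:
//   DWORD code = GetExceptionCode();
//   const char* desc = DecodeSehException(code);
//   if(desc) fprintf(stderr, "STRUCTURED EXCEPTION: 0x%08lX (%s).\n", code, desc);
//   else     fprintf(stderr, "STRUCTURED EXCEPTION: 0x%08lX.\n", code);
```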
Finally, you need to know that integer division by zero throws a SEH exception, but floating-point division by zero does not, at least not by default. There are EXCEPTION_FLT_DIVIDE_BY_ZERO and EXCEPTION_INT_DIVIDE_BY_ZERO error codes defined, but the default behavior of incorrect floating-point calculations (e.g. division by zero, logarithm of a negative value) is to return special values like Not a Number (NaN) or infinity and proceed with further calculations. This behavior can be changed, as described in "Floating-Point Exceptions".
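For the floating-point case, a minimal sketch of unmasking the divide-by-zero exception with the CRT control-word functions might look like this; which exceptions you unmask, and where you call this, are up to you:

```cpp
#include <float.h>  // _controlfp_s, _clearfp, _EM_ZERODIVIDE, _MCW_EM

void EnableFloatDivideByZeroException() {
    _clearfp(); // Clear any pending floating-point exception status first.
    unsigned int control;
    _controlfp_s(&control, 0, 0);                      // Read the current control word.
    _controlfp_s(&control, control & ~_EM_ZERODIVIDE,  // Clear the zero-divide mask bit...
                 _MCW_EM);                             // ...touching only the exception-mask bits.
    // From now on, a floating-point division by zero on this thread raises
    // a SEH exception instead of silently producing infinity.
}
```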
Comments | #windows #visual studio #c++ Share
# On Debug, Release, and Other Project Configurations
Sun 17 May 2020
Foreword: I was going to write a post about #pragma optimize in Visual Studio, which I learned about recently, but then I decided to describe the whole topic more broadly. As a result, this blog post may be useful or inspiring to every programmer coding in C++ or even in other languages, although I give examples based on C++ as used in Microsoft Visual Studio on Windows.
When we compile a program, we often need to choose one of several possible "configurations". Visual Studio creates two of them for a new C++ project, called "Debug" and "Release". As their names imply, the first one is mostly intended to be used during development and debugging, while the other should be used to generate the final binary of the program to be distributed to external users. But there is more to it. Each of these configurations actually sets multiple parameters of the project, and you can change them. You can also define your own custom configurations and have more than two of them, which can be very useful, as you will see later.
First, let's think about the specific settings that are defined by a project configuration. They can be divided into two broad categories. The first one covers all the parameters that control the compiler and linker. The difference between Debug and Release here is mostly about optimizations: the Debug configuration has optimizations disabled (which allows full debugging functionality and also keeps the compilation time short), while Release has optimizations enabled (which obviously makes the program run faster), and Visual Studio sets a whole set of compiler and linker options differently between the two.
Visual Studio also inserts additional code in the Debug configuration to fill memory with bit patterns that help with debugging the low-level memory access errors which plague C and C++ programmers. For example, seeing 0xCCCCCCCC in the debugger usually means uninitialized memory on the stack, 0xCDCDCDCD - allocated but uninitialized memory on the heap, and 0xFEEEFEEE - memory that was already freed and should no longer be used. In Release, memory under such incorrectly used pointers just holds its previous data.
The second category of things controlled by project configurations is specific features inside the code. In the case of C and C++ these are usually enabled and disabled using preprocessor directives like #ifdef and #if. The macros they test can not only be defined inside the code using #define, but also passed from the outside, among the parameters of the compiler, and so they can be set and changed depending on the project configuration.
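For example, here is a minimal sketch with a made-up macro name (ENABLE_TELEMETRY is not a standard macro): the MSVC compiler accepts /D to define a macro on the command line, and Visual Studio project configurations set the same thing through the "Preprocessor Definitions" property:

```cpp
#include <cstdio>

// Example build command:
//   cl /DENABLE_TELEMETRY main.cpp
// In the Visual Studio IDE, set per configuration under
// C/C++ > Preprocessor > Preprocessor Definitions.

#ifdef ENABLE_TELEMETRY
inline void LogTelemetryEvent(const char* name) { fprintf(stderr, "EVENT: %s\n", name); }
#else
inline void LogTelemetryEvent(const char*) {} // compiles away in configurations without the macro
#endif
```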
The features controlled by such macros can be very diverse. Probably the most canonical example is the standard assert macro (or your custom equivalent), which we define to do some error logging, break into the debugger, or even terminate the program completely in the Debug config, and to an empty macro in Release. In the case of C++ in Visual Studio, the macro defined in Debug is _DEBUG and in Release it is NDEBUG, and depending on the latter, the standard assert macro either does "something" or is just ignored.
There are more possibilities. Depending on these standard pre-defined macros or your custom ones, you can cut different functionality out of the code. One example is any instrumentation that lets you analyze and profile execution (like calls to Tracy). You probably don't want it in the final client build. The same goes for detailed logging functionality, any hidden developer settings, or cheat codes (in the case of games). On the other hand, you may want to include in the final build something that's not needed during development, like checking the user's license, some anti-piracy or anti-cheat protection, or the generation of certificates needed for the program to work on non-developer machines.
As you can see, there are many options to consider. Sometimes it can make sense to have more than two project configurations. Probably the most common case is the need for a "Profile" configuration that allows measuring performance accurately: it has all the compiler optimizations enabled, but still keeps the instrumentation needed for profiling in the code. Another idea would be to wrap super low-level, frequently executed checks like (index < size()) inside vector::operator[] into a separate macro called HEAVY_ASSERT and have a configuration called "SuperDebug" that we know works very slowly, but has all those checks enabled (see the sketch below). On the other end, remember that the "FinalFinal" configuration that you will use to generate the final binary for the users should be built and tested in your Continuous Integration during development, not only one week before the release date. Bugs that occur in only one configuration and not in the others are not uncommon!
Some bugs just don't happen in Debug, e.g. because uninitialized memory contains a consistent 0xCCCCCCCC instead of garbage data, or because a race condition between threads doesn't occur when certain functions take a different amount of time to execute. In some projects, the Debug configuration works so slowly that it's not even possible to test the program on a real, large data set in this configuration. I consider that a bad coding practice and I think it shouldn't happen, but it happens quite often, especially when STL is used, where every access to a myVector[i] element in unoptimized code is a function call with a range check instead of just a pointer dereference. In any case, sometimes we need to investigate bugs occurring in the Release configuration. Not all hope is lost then, because in Visual Studio the debugger still works, just not as reliably as in Debug. Because of optimizations made by the compiler, the instruction pointer (yellow arrow) may jump across the code inconsistently, and some variables may be impossible to inspect.
Here comes the trick that inspired me to write this whole blog post. I recently learned that there is this custom Microsoft pragma:

```cpp
#pragma optimize("", off)
```

which, if you put it at the beginning of a .cpp file or just before the function you are interested in, disables all compiler optimizations from that point until the end of the file, making debugging of that code nice and smooth, while the rest of the program behaves as before. (See also its documentation.) A nice trick!
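A minimal sketch of how it is typically scoped to a single function of interest, re-enabling optimizations afterwards with the same pragma:

```cpp
#pragma optimize("", off)  // from here on, compile without optimizations
void FunctionUnderInvestigation() {
    // This code is now easy to step through even in the Release configuration:
    // variables are not optimized away and the instruction pointer moves predictably.
}
#pragma optimize("", on)   // restore the optimization settings from the compiler options (/O...)
```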
Comments | #c++ #visual studio Share
# Two most obscure bugs in my life
Wed 14 Nov 2018
Fixing bugs is a significant part of software development, and a very emotional one: from frustration when you have no idea what is happening to euphoria when you find the cause and know how to fix it. There are many different kinds of bugs and reasons why they get introduced into the code. (Conducting a deep investigation of how each bug happened might be an interesting idea; I will write a separate blog post about it someday...) But the most frustrating ones are probably those that occur only under some specific conditions, like only on one of many supported platforms, or only in the "Release" and not the "Debug" configuration. Below you will find descriptions of the two most obscure bugs I've found and fixed in my life, for your education and amusement :) Both were in large, multi-platform C++ codebases.
1. There was a bug reported that caused incorrect program behavior, occurring on only one of the many platforms where the program was built and used. It was probably Mac, but I can't remember exactly. It was classified as a regression: it used to work before and stopped working, caught by automated tests, after one specific code change. The strange thing was that the culprit commit to the code repository introduced only new comments and nothing else. How can a change in a code comment introduce a new bug?!
It turned out that the author of this change wanted to draw a nice "ASCII art" in his comment, so he put something like this:
```cpp
// HERE STARTS AN IMPORTANT SECTION
//==================================================================\\
Function1();
Function2();
(...)
```
Have you noticed anything suspicious?...
A backslash '\' at the end of a line in C/C++ means that the logical line continues on the next line. This feature is used mostly when writing complex preprocessor macros. But code starting with two slashes '//' begins a comment that spans until the end of the line. Now the question is: does the single-line comment span to the next line as well? In other words: is the first function call commented out, or not?
I don't know and I don't really care that much about what "language lawyers" would read from the C++ specification and define as the proper behavior. The real situation was that the compilers used on some platforms considered the single-line comment to span to the next line, thus commenting out the call to Function1(), while others didn't. That caused the bug to occur on some platforms only.
The solution was obviously to change the comment so that it doesn't contain a backslash at the end of a line, even at the expense of the aesthetics and symmetry of this little piece of art :)
2. I was assigned a "stack overflow" bug occurring only in the "Debug" and not the "Release" configuration of a Visual Studio project. At first, I was pretty sure it would be easy to find. After all, "stack overflow" usually means infinite recursion, right? The stack is the piece of memory where local function variables are allocated, along with return addresses of nested function calls. Local variables tend not to be too big, while the stack is 1 MB by default, so there usually has to be an unreasonable call depth involved to hit that error.
It turned out not to be true. After a few debugging sessions and reading the code involved, I understood that there was no infinite recursion there. It was a traversal of a tree structure, but the depth of its hierarchy was not large enough to be a concern. It took me a while to realize that the stack was bloated beyond its capacity by a single function. It was a very long function - the kind of function you see in a corporate environment that defies all good practices, but that no one feels responsible for refactoring. It must have grown over time, with more and more code added gradually, until it reached many hundreds of lines. It was just one big switch with lots of code in each case.
```cpp
void Function(Option op) {
    switch(op) {
    case OPTION_FIRST:
        // Lots of code, local variables, and stuff...
        break;
    case OPTION_SECOND:
        // Lots of code, local variables, and stuff...
        break;
    case OPTION_THIRD:
        // Lots of code, local variables, and stuff...
        break;
    ...
    }
}
```
What really caused the bug was the number and size of the local variables used in this function. Each of the cases involved many variables, some of them big, like fixed-size arrays or objects of some classes, defined by value on the stack. It was enough to call this function recursively just a few times to exhaust the stack capacity.
Why "stack overflow" occurred in "Debug" configuration only and not in "Release"? Apparently Visual Studio compiler can lazily allocate or alias local variables used in different cases of a switch instruction, while "Debug" configuration has all the optimizations disabled and so it allocates all these variables with every function call.
The solution was just to refactor this long function with a switch - to place the code from each case in a separate, new function.
What are the most obscure bugs you've met in your coding practice? Let me know in the comments below.
Comments | #debugger #visual studio #c++ Share
# Ported CommonLib to 64 Bits
Sun 12 Nov 2017
I recently updated my CommonLib library to support 64-bit code. This is a change I had planned for a very long time. I wrote this code so many years ago that 64-bit compilation on Windows was not popular back then. Today the situation is the opposite: a modern Windows application or game could easily be 64-bit only. I decided to make the library work in both configurations. The way I did it was:
The most important and also most frequent change I needed to make in the code was to use the size_t type for every size (like a number of bytes), index (for indexing arrays), count (number of elements in some collection), length (e.g. number of characters in a string), etc. This is the "native" type intended for that purpose, and as such it is 32-bit or 64-bit, depending on the configuration. It is also the type returned by the sizeof operator and by functions like strlen. Before making this change, I carelessly mixed size_t with uint32.
I sometimes use the maximum available value (all ones) as a special value, e.g. to communicate "not found". Before the change I used the UINT_MAX constant for that. Now I use SIZE_MAX.
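A minimal sketch of that pattern (the function name is made up for illustration):

```cpp
#include <cstddef>  // size_t
#include <cstdint>  // SIZE_MAX

// Returns the index of the first element equal to value, or SIZE_MAX if not found.
size_t FindIndex(const int* items, size_t count, int value) {
    for(size_t i = 0; i < count; ++i)
        if(items[i] == value)
            return i;
    return SIZE_MAX;
}
```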
I know there is a group of developers who advocate using signed integers for indices and counts, but I'm among the proponents of unsigned types, just like the C++ standard library uses for its sizes and indices. In those very rare situations where I need a signed size, the correct solution is to use the ptrdiff_t type. This is the case where I apply the concept of "stride", which I blogged about here (Polish) and here.
I have a hierarchy of "Stream" classes that represent a stream of bytes that can be read from or written to. Derived classes offer a unified interface for a buffer in memory, a file, and many others. Before the change, all the offsets and sizes in my streams were 32-bit. I needed to decide how to change them. My first idea was to use size_t. It seems a natural choice for MemoryStream, but it doesn't make sense for FileStream, as files can exceed 4 GB regardless of the 32/64-bit configuration of my code; such files were not supported correctly by this class. I eventually decided to always use 64-bit types for the stream size and cursor position.
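A minimal sketch of that decision (the names below are illustrative, not the actual CommonLib API): per-call read/write sizes can stay size_t, while the total size and cursor position are always 64-bit:

```cpp
#include <cstddef>
#include <cstdint>

class Stream {
public:
    virtual ~Stream() = default;
    // Total size and cursor position are 64-bit even in a 32-bit build,
    // so a file-backed stream can address files larger than 4 GB.
    virtual uint64_t GetSize() const = 0;
    virtual uint64_t GetPosition() const = 0;
    virtual void SetPosition(uint64_t pos) = 0;
    // A single read/write operates on an in-memory buffer, so size_t is enough here.
    virtual size_t Read(void* outBytes, size_t maxBytes) = 0;
    virtual size_t Write(const void* bytes, size_t byteCount) = 0;
};
```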
There are cases where my usage of size_t in the library interface can't carry down to the implementation, because the underlying API supports only 32-bit sizes. That's the case with zlib (a data compression library that I wrapped into stream classes in files ZlibUtils.*), as well as BSTR (a string type used by Windows COM that I wrapped into a class in files BstrString.*). In those cases I just put asserts in the code that check whether the actual size doesn't exceed the 32-bit maximum before downcasting. I know it's not a safe solution - I should report an error there (throw an exception), but well... That's only my personal code anyway :)
```cpp
BstrString::BstrString(const wchar_t *wcs, size_t wcsLen)
{
    (...)
    assert(wcsLen < (size_t)UINT_MAX);
    Bstr = SysAllocStringLen(wcs, (UINT)wcsLen);
}
```
Among other 64-bit compatibility issues, the only one I hit was the following macro:

```cpp
/// Assert that ALWAYS breaks the program when false
/// (in debugger - hits a breakpoint, without debugger - crashes the program).
#define ASSERT_INT3(x)   if((x) == 0) { __asm { int 3 } }
```

The problem is that Visual Studio offers the inline assembler in 32-bit code only. I needed to use the __debugbreak intrinsic function from the intrin.h header instead.
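A minimal sketch of the portable replacement, keeping the same macro name (__debugbreak works in both 32-bit and 64-bit builds):

```cpp
#include <intrin.h>  // __debugbreak

/// Assert that ALWAYS breaks the program when false,
/// in both 32-bit and 64-bit builds.
#define ASSERT_INT3(x)   do { if((x) == 0) { __debugbreak(); } } while(false)
```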
Overall, porting this library went quite quickly and smoothly, without any major issues. Now I just need to port the rest of my code to 64 bits.
Comments | #visual studio #c++ #commonlib Share
# New Version of PVS-Studio
Thu 09 Nov 2017
PVS-Studio is particularly good at finding issues with code portability between 32-bit and 64-bit. Of my personal projects, I have already ported CommonLib to 64 bits, and RegScript2 was written to support 64 bits from the start, but porting my main app (a music visualization program) to 64 bits is a large task still on my TODO list. Even if I know how to write portable code (use size_t, not int, etc. :), I made the first commits to this repository 8 years ago, when my programming knowledge was much smaller, so I'm sure there are many nasty bugs in there. Making it work as a 64-bit app will be a difficult task and I'm sure PVS-Studio will help me with it. I will share my experiences and conclusions when I eventually do it.
In the meantime, I recommend checking out their blog, where the developers of this tool share a lot of valuable information. They also maintain a list of articles describing errors they have found in open source projects.
Comments | #tools #c++ #pvs-studio #visual studio Share
# Visual C++: IntelliSense Versus Macros
Thu 17 Aug 2017
When you code in C++ using Visual Studio, you may run into the following problem: your code uses preprocessor directives that depend on a macro defined elsewhere, e.g. in one of the CPP files that include the header file you are writing, so IntelliSense gets lost and stops working, or even grays out that part of your code completely as inactive. For example:
```cpp
// Some code...
#ifdef EXTERNALLY_DEFINED_MACRO
// Some code where IntelliSense stops working...
#endif
```
I just found a solution to that. It turns out there is a special macro predefined when the code is processed by Visual Studio IntelliSense. It's called simply __INTELLISENSE__. By using it, you can change parts of your code as seen by the IntelliSense parser, e.g. define some macros, without influencing the logic seen by the compiler. For example:
```cpp
#ifdef __INTELLISENSE__
#define EXTERNALLY_DEFINED_MACRO
#endif

// Some code...
#ifdef EXTERNALLY_DEFINED_MACRO
// Some more code where IntelliSense is working again...
#endif
```
Comments | #c++ #visual studio Share
# Microsoft Visual Studio 2017 - My Experience
Thu 30 Mar 2017
Visual Studio 2017 came out recently. The list of new features looks like it was written by marketing rather than technical people. It starts with "Unparalleled productivity for any dev, any app, and any platform. Use Visual Studio 2017 to develop apps for Android, iOS, Windows, Linux, web, and cloud. Code fast, debug and diagnose with ease, test often, and release with confidence. You can also extend and customize Visual Studio by building your own extensions. Use version control, be agile, and collaborate efficiently with this new release!" - I've never seen so many buzzwords in just one paragraph.
The rest of the page is no different. They even call their installer "a new setup experience". They've also introduced "Lightweight Solution load", which is disabled by default - as if everyone is assumed to prefer the slower option :) Some other changes: "Visual Studio starts faster, is more responsive, and uses less memory than before." - that's an unexpected direction. "Performance improvement: basic_string::operator== now checks the string's size before comparing the strings' contents." - wow, that's genius! They should file a patent for that ;) I hope they do the same for std::vector and other STL containers.
OK, but jokes aside, I've installed it on my personal PC; it installed quite fast and works well. It preserved my settings, like the list of Include and Library Directories. Upgrading my home projects went smoothly, without any problems.
There are many changes valuable for native code developers. The "What's New for Visual C++ in Visual Studio 2017" page mentions over 250 bug fixes, other compiler improvements, and improved support for C++11, 14, and 17. I've already heard stories of programs running much faster after recompilation with this new compiler.
Contrary to what I thought before, Microsoft didn't abandon the Graphics Diagnostics embedded in MSVS after they released the new standalone PIX. They've actually added some new features to it.
So I definitely recommend upgrading to Visual Studio 2017. It is, IMHO, the best C++ IDE, and the new version is just the next step in the right direction.
It seems there is no new version of the "Microsoft Visual C++ Redistributable Package" this time. Programs compiled with VS 2017 use VCRUNTIME140.DLL, just like in the 2015 version.