Entries for tag "libraries", ordered from most recent. Entry count: 42.
# Using Google Test in Visual C++ 2012
Sun 02 Mar 2014
I was recently learning Google Test (the Google C++ Testing Framework). It's a C++ library from Google (shared under the new BSD license) for unit testing, following the xUnit convention.
As always with a C++ library, it's not as easy as just downloading the sources, compiling and using them. Here are my experiences with making it work in Visual C++ 2012:
1. A Visual Studio solution is already prepared in msvc/gtest.sln. You just need to confirm upgrading it to the new version and ignore the report with warnings.
2. You only need the "gtest" project. It compiles to a static library, which is the way I decided to use it. Alternatively, you could just include all the library sources in your project.
3. The library uses tuple from the new C++ standard, which requires variadic templates. Visual C++ doesn't support this feature, so to make it compile without errors, you need to globally define the following macros (in project properties > C/C++ > Preprocessor > Preprocessor Definitions, in both Debug and Release configurations, in both the library and your client project):
GTEST_USE_OWN_TR1_TUPLE=0
_VARIADIC_MAX=10
4. If your project links to the standard runtime library differently than gtest does, you will get linker errors like:
1>gtestd.lib(gtest-all.obj) : error LNK2038: mismatch detected for 'RuntimeLibrary': value 'MTd_StaticDebug' doesn't match value 'MDd_DynamicDebug' in MyTestApp.obj
1>msvcprtd.lib(MSVCP110D.dll) : error LNK2005: "public: __thiscall std::_Container_base12::_Container_base12(void)" (??0_Container_base12@std@@QAE@XZ) already defined in gtestd.lib(gtest-all.obj)
1>msvcprtd.lib(MSVCP110D.dll) : error LNK2005: "public: __thiscall std::_Container_base12::~_Container_base12(void)" (??1_Container_base12@std@@QAE@XZ) already defined in gtestd.lib(gtest-all.obj)
...
That's probably because the default setting for a new project is to link it dynamically, while the gtest project links it statically. The setting should be the same in both gtest and your project. To change it to dynamic linking, enter project properties > C/C++ > Code Generation > Runtime Library and choose Multi-threaded Debug DLL (for Debug) and Multi-threaded DLL (for Release).
After setting it up, using the library is very easy. In your project, you just need to:
1. Add the "include" subdirectory to your include directories.
2. #include <gtest/gtest.h>
3. Link with "msvc/gtest/Debug/gtestd.lib" (in Debug) and "msvc/gtest/Release/gtest.lib" (in Release).
4. Write your tests, like:
TEST(IntegerTest, Addition)
{
    EXPECT_EQ(4, 2 + 2);
    EXPECT_EQ(10, 3 + 7);
}
5. Write a main function:
int main(int argc, char** argv)
{
    testing::InitGoogleTest(&argc, argv);
    return RUN_ALL_TESTS();
}
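As a convenience, the linking from step 3 can also be expressed directly in code with MSVC's #pragma comment directive, so each configuration picks the right .lib automatically. This is only a sketch - it assumes the msvc/gtest output directories have been added to the linker's library search path:

```
#ifdef _DEBUG
#pragma comment(lib, "gtestd.lib")   // Debug build of gtest
#else
#pragma comment(lib, "gtest.lib")    // Release build of gtest
#endif
```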
I like this library - it's easy to learn and use, while quite powerful. It's also well documented - the main documentation consists of the Primer, the Advanced Guide and the FAQ.
There is also a GUI application available, called Guitar (Google Unit Test Application Runner) or gtest-gbar, which lets you run your test application and browse the results in a window instead of the console.
Comments | #testing #libraries #c++ Share
# Building JSON Parser Library for C++
Wed 29 Sep 2010
C++ is a flawed language in many aspects, but one of its biggest problems is how difficult it is to start using a C or C++ library. Because of the lack of binary compatibility, separate versions have to be prepared for each operating system, compiler and compiler version. Often a version for my compiler is not available (or no binary distributions are available at all) and I have to compile the library from source code, which always causes problems.
I recently wanted to start using JSON as a configuration file format. It's a nice description language that strikes a good compromise between XML (which has lots of redundancy, is unreadable, and which I generally dislike) and TokDoc (my custom description language). In search of a JSON parser library for C++, I decided to use json-cpp. And here the story begins...
Json-cpp is distributed only as source code. The README file says that I have to use the Scons build system to compile it. WTF is this? How many f*** build systems like this are out there?! Well, I've downloaded it. It looks like it uses and requires Python. I have Python 2.6 installed - it should be OK. The Scons Windows setup installed it in the Python directory - OK. What now?
The json-cpp readme says about Scons: "Unzip it in the directory where you found this README file. scons.py Should be at the same level as README." Unzip?! OK, so maybe I should download the ZIP distribution of Scons instead of the setup. I did, entered the ZIP archive and... there is no scons.py file there! Only scons.1 and some other mysterious files. The only Python script in the main directory is setup.py. So despite what the json-cpp README says, Scons cannot just be unzipped - it has to be set up somehow. ARGH!
Luckily, this time the solution turned out to be easy. There is a Visual C++ solution file in the build\vs71 subdirectory. I managed to successfully convert it to a Visual C++ 2008 version and compile it to static LIB files. I only had to ensure that the Runtime Library in Project Options is set to the same type as in all my projects, that is, the DLL version.
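Once the LIB files are built, reading a config file with json-cpp is short. The following is only a sketch from memory of the json-cpp API of that era (Json::Reader / Json::Value), not code from this post, and the "width" key and file name are made-up examples:

```
#include <json/json.h>
#include <fstream>

// Parse a JSON config file and read one value with a fallback default.
std::ifstream file("config.json");
Json::Value root;
Json::Reader reader;
if (reader.parse(file, root))
{
    int width = root.get("width", 640).asInt(); // 640 is used when the key is missing
}
```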
Comments | #c++ #libraries Share
# Parallelizing Algorithms with Intel TBB and C++ Lambdas
Fri 27 Aug 2010
My demo for RiverWash is practically finished. I could still polish it or even make some big changes, because I know it's not great, but that's another story. What I want to write about today is how easily an algorithm can be changed to run in parallel on multicore processors when you use Intel Threading Building Blocks and C++ lambdas.
First, here is the algorithm. In one of my graphics effects I fill a 256x512 texture on the CPU every frame. For each pixel I calculate a color based on some input data, which is constant during this operation. So the code looks like this:
void SaveToTexture(const D3DLOCKED_RECT &lockedRect)
{
    uint x, y;
    char *rowPtr = (char*)lockedRect.pBits;
    for (y = 0; y < TEXTURE_SIZEY; ++y)
    {
        XMCOLOR *pixelPtr = (XMCOLOR*)rowPtr;
        for (x = 0; x < TEXTURE_SIZEX; ++x)
        {
            *pixelPtr = CalcColorForPixel(x, y);
            ++pixelPtr;
        }
        rowPtr += lockedRect.Pitch;
    }
}
How do you parallelize such a loop? First, some theoretical background. Intel TBB is a free C++ library for high-level parallel programming. It has a nice interface that makes extensive use of C++ language features, but is very clean and simple. It provides many useful classes, from different kinds of mutexes and atomic operations, through thread-safe, scalable containers and memory allocators, to a sophisticated task scheduler. But for my problem it was sufficient to use the simple parallel_for function, which uses the task scheduler internally. To start using TBB, I just had to download and unpack the library, add the appropriate paths as Include and Library directories in my Visual C++, and add this code:
#include <tbb/tbb.h>
#ifdef _DEBUG
#pragma comment(lib, "tbb_debug.lib")
#else
#pragma comment(lib, "tbb.lib")
#endif
The second topic I want to cover here is lambdas - a great new language feature from the C++0x standard, available since Visual C++ 2010. Lambdas are simply unnamed functions defined inline inside other code. What's so great about them is that they can capture the context of the caller. Selected variables can be captured by value or by reference, as well as the this pointer or even "everything". This makes them an ideal replacement for the ugly functors that had to be used in C++ before.
Putting it all together, the parallelized version of my algorithm is not much more complicated than the serial version:
void SaveToTexture(const D3DLOCKED_RECT &lockedRect)
{
    tbb::parallel_for(
        tbb::blocked_range<uint>(0, TEXTURE_SIZEY),
        [this, &lockedRect](const tbb::blocked_range<uint> &range)
        {
            uint x, y;
            char *rowPtr = (char*)lockedRect.pBits + lockedRect.Pitch * range.begin();
            for (y = range.begin(); y != range.end(); ++y)
            {
                XMCOLOR *pixelPtr = (XMCOLOR*)rowPtr;
                for (x = 0; x < TEXTURE_SIZEX; ++x)
                {
                    *pixelPtr = CalcColorForPixel(x, y);
                    ++pixelPtr;
                }
                rowPtr += lockedRect.Pitch;
            }
        });
}
This simple change kept all 4 of my CPU cores more than 90% busy and gave an almost 4x speedup in terms of frame time, which is a good result. So as you can see, coding parallel applications is not necessarily difficult :)
Comments | #libraries #rendering #c++ Share
# Music Analysis - Spectrogram
Wed 14 Jul 2010
I've started learning about sound analysis. I have some gaps in my education when it comes to digital signal processing (greetings to the professor who taught this subject at our university ;), but Wikipedia comes to the rescue. As a starting point, here is a spectrogram I've made from one of my recent favourite songs: Sarge Devant feat. Emma Hewitt - Take Me With You.
Now I'm going to explain in detail how I've done this by showing some C++ code. First I had to figure out how to decode an MP3, OGG or other compressed sound format. FMOD is my favourite sound library and I knew it can play many file formats. It took me some time, though, to find functions for quickly decoding uncompressed PCM data from a song without actually playing it for all 3 minutes. I've found on the FMOD forum that Sound::seekData and Sound::readData can do the job. I finally ended up with this code (all code shown here is stripped of the error checking which I actually do everywhere):
Comments | #math #libraries #music #rendering #dsp #algorithms Share
# Smarty Cheatsheet
Fri 04 Jun 2010
I knew about this before, but today I've found some time to study Smarty in depth - a free template engine for PHP. Templates in webdev (not to be confused with templates in C++) look like an interesting concept, because they separate PHP code from HTML code. I wish I had used them from the start when coding my homepage and the www.gamedev.pl website, instead of print()-ing HTML tags directly...
Anyway, I've written a small Smarty Cheatsheet. Of course it makes sense only after studying the original library documentation; it cannot replace it.
By the way, I was also looking for some PHP library for creating RSS or ATOM feeds and I've found Feedcreator. Now I'm going to make something useful with it...
NEW: This "something" is an RSS feed for the newest screenshots uploaded to the www.gamedev.pl website: Warsztat - Screeny, as well as for projects: Warsztat - Projekty.
Comments | #webdev #php #libraries Share
# Some Thoughts about Library Design
Sun 16 May 2010
Much has been said about designing a good user interface, whether for desktop applications, websites or games. There are whole books available about GUI design, and even this year's IGK-7'2010 conference featured two lectures about interfaces in games. But what about interfaces for programmers?
I can't find much about the rules of good library API design and I believe there is much to say on this subject. I only know The Little Manual of API Design, written by Jasmin Blanchette from Trolltech/Nokia, one of the creators of the Qt library (thanks for the link, Przemek!). There is also a blog entry about a Math Library which is quite interesting. Inspired by it, I've come up with a general thought: when designing a library and its API, you cannot have all of the following features - you have to choose 2 or 3 of them and make some compromise:
I think some patterns and best practices, as well as some anti-patterns, could be identified by looking at the interfaces of many libraries. Maybe I'll post more of my thoughts on this subject in the future. In the meantime, do you know of any other resources about API design?
Comments | #philosophy #software engineering #libraries Share
# LZMA SDK - How to Use
Sat 08 May 2010
What do you think about when I say the word "compression"? If you currently study computer science, you probably think about the details of algorithms like RLE, Huffman coding or the Burrows-Wheeler transform. If not, then you surely associate compression with archive file formats such as ZIP and RAR. But there is something in between - a kind of library that lets you compress some data - implementing a compression algorithm but not providing a ready file format to pack multiple files and directories into one archive. Such a library is useful in gamedev for creating a VFS (Virtual File System). Probably the most popular one is zlib - a free C library that implements the Deflate algorithm. I've recently discovered another one - LZMA. Its SDK is also free (public domain) and the basic version is a small C library (C++, C# and Java APIs are also available, as well as some additional tools). The library uses the LZMA algorithm (Lempel–Ziv–Markov chain algorithm, the same as in the 7z archive format), which AFAIK has a better compression ratio than Deflate. So I've decided to start using it. Here is what I've learned:
If you decide to use only the C API, it's enough to add some C and H files to your project - the ones from the LZMASDK\C directory (without subdirectories). Alternatively, you can compile them as a static library.
There is a little bit of theory behind the LZMA SDK API. First, the term props means a 5-byte header where the library stores some settings. It must be saved along with the compressed data and given back to the library before decompression.
Next, the dictionary size. It is the size of a temporary memory block used during compression and decompression. The dictionary size can be set during compression and is then saved inside props; the library uses a dictionary of the same size during decompression. The default dictionary size is 16 MB, so IMHO it's worth changing, especially as I haven't noticed any meaningful drop in compression ratio when I set it to 64 KB.
And finally, an end mark can be saved at the end of the compressed data, so that the decompression code can determine where the data ends. Alternatively, you can decide not to use the end mark, but then you must remember the exact size of the uncompressed data somewhere. I prefer the second method, because remembering the data size takes only 4 bytes (for a 4 GB limit) and can be useful anyway, while compressed data finished with an end mark is about 6 bytes longer than without it.
Compressing a full block of data with a single call is simple. You can find the appropriate functions in the LzmaLib.h header. Here is how you can compress a vector of bytes using the LzmaCompress function:
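A sketch of such a call, written from memory against LzmaLib.h - the parameter values follow the documented defaults except the 64 KB dictionary discussed above, and I haven't verified this against a specific SDK version:

```
#include <vector>
#include "LzmaLib.h" // from LZMASDK\C

std::vector<unsigned char> Compress(const std::vector<unsigned char> &src,
                                    unsigned char outProps[5])
{
    size_t propsSize = 5; // the 5-byte props header described above
    size_t destLen = src.size() + src.size() / 3 + 128; // worst-case bound
    std::vector<unsigned char> dest(destLen);
    int res = LzmaCompress(
        &dest[0], &destLen,
        &src[0], src.size(),
        outProps, &propsSize,
        5,            // level: 0..9, default 5
        64 * 1024,    // dictSize: 64 KB instead of the 16 MB default
        3, 0, 2, 32,  // lc, lp, pb, fb: library defaults
        1);           // numThreads
    if (res != SZ_OK)
        dest.clear();
    else
        dest.resize(destLen); // shrink to the actual compressed size
    return dest;
}
```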
Comments | #commonlib #libraries #algorithms Share
# My Impressions about SQLite
Tue 20 Apr 2010
SQLite is a very strange library. It's a database engine that can store lots of data in a relational database and exposes an API based on the SQL language. On the other hand, it's not a huge application that has to be installed in the system, works in the background and requires connecting via a network interface, like MySQL or PostgreSQL do. It's actually a lightweight library written in C that can be linked with your program and uses a specified file as the database. It's fascinating that such a small library (there is even a preprocessed source version as a single 3.75-megabyte C file!) supports so much of the SQL language.
The API of the SQLite library is similar to any other SQL-based database access API for any programming language. I've played with it a bit and here is my small sample code:
#include <sqlite3.h>

int main()
{
    sqlite3 *db;
    sqlite3_open_v2("D:\\tmp\\test.db", &db, SQLITE_OPEN_READWRITE | SQLITE_OPEN_CREATE, NULL);
    // Make sure the table exists before inserting into it.
    sqlite3_exec(db, "create table if not exists table1 (id integer)", NULL, NULL, NULL);
    sqlite3_exec(db, "begin", NULL, NULL, NULL);
    sqlite3_stmt *stmt;
    sqlite3_prepare_v2(db, "insert into table1 (id) values (?)", -1, &stmt, NULL);
    for (int i = 0; i < 10; i++)
    {
        sqlite3_reset(stmt);
        sqlite3_bind_int(stmt, 1, i);
        sqlite3_step(stmt);
    }
    sqlite3_exec(db, "commit", NULL, NULL, NULL);
    sqlite3_finalize(stmt);
    sqlite3_close(db);
    return 0;
}
It's hard for me to think of an application for such a strange library. It offers too much when you just want to design your own file format, and too little if you need a fully-featured database, like for a web server. So why did I decide to get to know this library? It's all because of an interesting gamedev tool called Echo Chamber. It's a free data mining program that can visualize data from SQLite database files as many different kinds of plots, even 3D ones. So when you integrate logging of some numeric data from your engine into an SQLite database, you can easily do performance analysis with it.