MRI Rendering

Warning! Some information on this page is older than 5 years now. I keep it for reference, but it probably doesn't reflect my current knowledge and beliefs.

Mon, 14 Feb 2011

Having an injured arm is not a good thing, but thanks to it I had an MRI (Magnetic Resonance Imaging) scan and got the results on a CD, so I decided to try to render them in 3D as soon as I could move my arm a bit. Here is the final result:

I went to my parents' house and took only my 5-year-old laptop with me, which has an ATI Radeon Xpress 200M GPU with Shader Model 2. Still, I wanted to see how much GPU-accelerated volumetric rendering I could do with it, especially as I had always wanted to play around with such stuff and had only tried volumetric fog rendering before.

The examination results I got on the CD are saved in a special medical file format called DICOM. The specification of this format seems to be freely available, but I didn't have to deal with it - the format turned out to be readable by my favourite image browser - XnView. Files in a single subdirectory in DICOM\STU001 contain grayscale 2D images forming layers (slices) of a 3D scan. In my case it was, for example, 20 images, each with a resolution of 256 x 256. I converted them all to PNG format with XnView so I could load them in my code with just D3DX functions.

On the CD with the results there was also a Windows application for browsing these images in both 2D and 3D. It's called "eFilm Lite" and it looks very powerful. But this did not discourage me from writing my own :)

I started my code with a simple Direct3D 9 framework and AntTweakBar integration. Then I loaded all the slices from PNG files into a texture in D3DPOOL_SCRATCH and stored the data in a single 3D unsigned char[] array. My first try was to render these points using point sprites, pushed to the GPU every frame, slice by slice, through a D3DUSAGE_DYNAMIC vertex buffer.

It worked and looked OK, but the look mainly depends on how I map the values from the array to the color and alpha of the rendered particles. For alpha I just used pow(value, some_tweakable_exponent) with a visibility threshold on the result. As for the color, I first tried to map the values to hue in HSB color space:

Then I thought that the more sophisticated mapping I saw in the eFilm Lite application must be there for a reason, so I "borrowed" it from what can be seen in an appropriate dialog window:

Into a table in a CSV file:

"Value","AARRGGBB"
0,00000000
22,00000000
25,556a3831
37,40ab5a4f
40,40217b76
59,2036c7bf
60,20b8be46
124,55e7e8bf
125,00000000

Which I then load in my program, interpolate, convert into a lookup table and use to map the values of the 3D image to color and alpha for rendering. The result (still using point sprites) looks like this. By the way, I also added a clip plane in this version - a plane that can be freely rotated and moved to reveal internal parts of the image.

I was also eager to develop my own value -> RGBA mapping, so I even wrote code for exporting a histogram of the values from the array, with antialiased lines and everything. An example histogram looks like this. The one at the top is linear, the second one is logarithmic and the third shows the difference between the second and a straight sloping line. Still, I had no idea how to come up with some reasonable color and alpha values, so it turned out to be a dead end.

The second big part of my project (and a branch in my Git repository) was a completely new rendering method, as described in the great article Volume Rendering Techniques (Milan Ikits, Joe Kniss, Aaron Lefohn, Charles Hansen, Chapter 39 of the "GPU Gems" book). I uploaded the whole 3D table to the GPU once, as a volume texture in D3DFMT_L8 format. I then render several polygonal slices, rotated to always be perpendicular to the viewing direction. Each slice samples the volume texture for a value, then does a dependent lookup into a lookup texture to convert this value to RGBA - color and transparency. That's what is rendered on the screen, back-to-front, in premultiplied alpha mode. The result looks like this:
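In HLSL terms, the core of this method boils down to a few lines of pixel shader. Below is just a sketch - the names (VolumeSampler, TransferSampler, SlicePS) are illustrative, not necessarily what I used in my code:

sampler3D VolumeSampler;   // the D3DFMT_L8 volume texture with MRI values
sampler1D TransferSampler; // 1D lookup texture built from the CSV table

float4 SlicePS(float3 uvw : TEXCOORD0) : COLOR
{
    float value = tex3D(VolumeSampler, uvw).r;   // scalar MRI value at this point of the slice
    float4 rgba = tex1D(TransferSampler, value); // dependent lookup: value -> color and alpha
    rgba.rgb *= rgba.a;                          // premultiplied alpha, slices blended back-to-front
    return rgba;
}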

Unfortunately my GPU seems not to support linear filtering of volume textures, even though I declared MinFilter = LINEAR; MagFilter = LINEAR; in my shader and the DirectX Caps Viewer says "Yes" to D3DPTFILTERCAPS_MINFLINEAR and D3DPTFILTERCAPS_MAGFLINEAR in the VolumeTextureFilterCaps group. I tried many different things in my pixel shader - sampling the texture in 8 places and performing linear filtering on my own, as well as sampling several places along the view direction and doing some form of numerical integration. Even better quality could be obtained by looking up ARGB from the value at every volume texture sample and interpolating these colors (of course in linear, not gamma space) instead of interpolating the single MRI value before the lookup. There is one big problem though: Pixel Shader 2.0 supports only 32 sampling and 64 arithmetic instructions - far too few to allow for all that. I really pushed this GPU to its limits. My laptop even hung once and another time a blue screen of death appeared. On my friend's laptop with NVIDIA 300M graphics the program ran several times faster. Oh well... finally I ended up with a pixel shader that performs linear filtering along the Z axis only (because that's where I have the worst resolution), takes 3 such samples along the view direction and integrates them using Simpson's method: val = (val1 + 4*val2 + val3) / 6. That gives 6 sampling instructions.
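For the curious, this final filtering trick could be sketched in HLSL roughly like below. Again, the names and the exact texture-coordinate math are illustrative, not copied from my shader: SliceDistZ stands for 1 / (number of Z slices), ViewStep for a small offset along the viewing direction in texture space, and VolumeSampler is assumed to use point filtering.

sampler3D VolumeSampler;
sampler1D TransferSampler;
float SliceDistZ;
float3 ViewStep;

// Manual linear filtering along Z only: two point-filtered fetches from
// the neighbouring Z slices, blended by hand.
float SampleFilteredZ(float3 uvw)
{
    float z = uvw.z / SliceDistZ - 0.5;
    float zLo = floor(z);
    float v0 = tex3D(VolumeSampler, float3(uvw.xy, (zLo + 0.5) * SliceDistZ)).r;
    float v1 = tex3D(VolumeSampler, float3(uvw.xy, (zLo + 1.5) * SliceDistZ)).r;
    return lerp(v0, v1, z - zLo);
}

float4 FilteredSlicePS(float3 uvw : TEXCOORD0) : COLOR
{
    // 3 filtered samples along the view direction, combined with Simpson's rule
    // - 6 volume texture fetches in total.
    float v1 = SampleFilteredZ(uvw - ViewStep);
    float v2 = SampleFilteredZ(uvw);
    float v3 = SampleFilteredZ(uvw + ViewStep);
    float value = (v1 + 4*v2 + v3) / 6;
    float4 rgba = tex1D(TransferSampler, value);
    rgba.rgb *= rgba.a;
    return rgba;
}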

I continued to play with coloring and it reminded me of an article by Gooch et al. on the subject of NPR (Non-Photorealistic Rendering) - A Non-Photorealistic Lighting Model For Automatic Technical Illustration. Simply speaking, it says that an image looks clearer when the shading goes from cool to warm colors (like from blue to yellow). It corresponds to what I read in an article I stumbled upon recently: Color Wheels are wrong? How color vision actually works, which says that although human vision has three basic colors - RGB - it then computes the R - G and (R+G) - B differences. So I changed my CSV file to shade my image from blue to yellow and I liked the result. By the way, I restored the clipping plane functionality in the new renderer, and this time I did it using the real user-defined clip plane functionality of Direct3D 9.
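Just to illustrate the idea - in my program the blue-to-yellow ramp lives in the CSV transfer function rather than in code, but expressed directly in HLSL it would be roughly a cool-to-warm interpolation (the exact colors here are made up):

float3 CoolToWarm(float value) // value normalized to 0..1
{
    const float3 cool = float3(0.1, 0.2, 0.8); // blue-ish for low values
    const float3 warm = float3(0.9, 0.8, 0.2); // yellow-ish for high values
    return lerp(cool, warm, saturate(value));
}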

Finally, there was a very tempting extension to my rendering technique, which I found in the GPU Gems article mentioned above - shading. But how to shade a volumetric image, where each place in 3D space has some color and some transparency? Where to get normals from? The article advises calculating gradients of the value stored in the 3D array. That's exactly what I did and it really works. For each voxel in the volume texture, I calculated the 3D gradient as the difference between adjacent voxels (central differences):

cell[x, y, z].gradient.xyz = (
    cell[x+1, y, z].value - cell[x-1, y, z].value,
    cell[x, y+1, z].value - cell[x, y-1, z].value,
    cell[x, y, z+1].value - cell[x, y, z-1].value);

I changed my volume texture to D3DFMT_A8R8G8B8, stored the old value in the alpha component and the gradient in RGB (scaled and biased). The pixel shader can now sample all RGBA components, rescale the gradient back to (-1..1, -1..1, -1..1), scale it by some tweakable factor and use it as a normal vector in the standard Lambert lighting equation. Here is the result:
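The shading part of the pixel shader could be sketched like this (again with illustrative names; whether to normalize the gradient or add an ambient term is a matter of taste and instruction count):

sampler3D VolumeSampler;   // D3DFMT_A8R8G8B8: gradient in RGB, original value in A
sampler1D TransferSampler;
float3 LightDir;           // normalized direction towards the light
float  GradientScale;      // tweakable factor

float4 ShadedSlicePS(float3 uvw : TEXCOORD0) : COLOR
{
    float4 texel  = tex3D(VolumeSampler, uvw);
    float3 normal = (texel.rgb * 2 - 1) * GradientScale; // rescale from 0..1 back to -1..1
    float4 rgba   = tex1D(TransferSampler, texel.a);     // the MRI value is stored in alpha
    rgba.rgb *= saturate(dot(normal, LightDir));         // standard Lambert term
    rgba.rgb *= rgba.a;                                  // premultiplied alpha, as before
    return rgba;
}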

Of course, much more could be done on this subject - I have only scratched the surface of the fascinating world of volumetric rendering here. I could, for example, do some form of volumetric shadow mapping to make the lighting more realistic, or calculate some screen-space ambient occlusion.

Source Code, update from 2011-11-23: As I received multiple requests to disclose the source code of this project, here it is: Volume_render.zip (618 KB). It contains the source code of the program, written in C++ using DirectX 9 and Shader Model 2, some base libraries, the shader source in HLSL, as well as the complete Git repository with the history of my changes. Enjoy :)

#directx #medicine #rendering
