Last weekend I was showing my music visualizations at two parties. The first one was in a club, projected on a flat screen, while the second was an open-air party (my 7th and last in this summer season :) with the image projected onto trees. For that one I had to prepare something different - simple, high-contrast shapes in a single color only, so it would be clearly visible. Here is a small video:
It doesn't look like this because I wanted it exactly this way or because that was my "artistic vision", but simply because showing some rotating images downloaded from the Internet and blending in the transformed feedback from the previous frame was the easiest way to start with something interesting-looking.
Now I have tons of ideas for improving this program as soon as I find some free time. Besides small technical tasks like refactoring the code or simply adding new graphical effects, I plan the following big TODOs (in no particular order yet):
Video playback. Being able to show video streams instead of still images would be great, but it is also hard to implement. I plan to use the FFmpeg library for this purpose, though there are other options as well. I'm afraid of the performance cost of uploading each decoded frame from the CPU to the GPU, so maybe using Microsoft Media Foundation/DXVA (it seems to work with Direct3D 11) or DirectShow would be better. On the other hand, codecs for various file formats could be a problem there, while FFmpeg has its own codecs.
A second window with a GUI, to be displayed on the laptop's screen, because right now I have only a standard Windows console, with output done using printf() and input taken from console commands (which is not a convenient way to control the program). Besides buttons and other controls, an important feature would be to show in this window a small copy of the final image presented on the projector (so the operator doesn't need to turn around to see it) and some technical parameters updated in real time (like the current FPS or an audio VU meter).
Music analysis. Right now I have some basic audio input implemented using Windows Multimedia/WaveIn; I want to switch to the Core Audio APIs/WASAPI. Currently I only measure peak amplitude and visualize it on the shape in the center, but I could extract much more information from the audio input (perform an FFT, detect the beat), control more parameters with it (like the alpha transparency of some image layers) and try to automatically synchronize the visuals to the beat.
A framework for parameters of different types (bools, ints, floats, vectors, strings, sub-containers etc.), where each one can be serialized to/from a file and exposed to the GUI for editing. I guess all programmers love to implement this :) I've done it multiple times already too, but this time it will be the ultimate, universal, all-encompassing solution ;) I want a parameter to be able to have a constant value, to be animated in time according to some curve, or to be evaluated from a function depending on some variables, like the data from the music input. This way I could use the framework to describe the elements of a scene (parameters like position, size, orientation, colors etc.). I already have such a system implemented to some degree, but it could definitely be improved.
Support for some input device more convenient than a laptop touchpad and keyboard, like a MIDI controller (or a smartphone or tablet simulating one) or... a Wiimote :)
Asynchronous loading of resources, like textures, in the background on a separate thread, because I know that one day all my resources won't fit into GPU memory and I will have to load them on the fly, preparing the next scene while showing the current one.
Video capture from a camera (or better, a camera with depth, like Kinect), plus maybe some processing with computer vision algorithms (like those in the OpenCV library).