There's not much to this post, just a simple class to help out with OpenGL selection. I've used it in my PhD project to implement a basic 3D paint utility. If you have an existing class for rendering scenes, you can extend that to include basic object selection by deriving from the OpenGLSelection class and implementing the DrawSelection method, using the OpenGL name stack. Selection can then be performed using the RenderSelection method, which takes the current mouse coordinates and returns the number of potentially selected objects along with a list of their IDs.
An example of usage is included in the header; use the code for whatever you'd like.
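For context on what a method like RenderSelection has to do behind the scenes: legacy OpenGL selection renders the scene in GL_SELECT mode and then decodes the hit records the driver writes into a selection buffer. The helper below is a hypothetical sketch of that decoding step (DecodeHitNames is my name, not part of Selection.h); each hit record holds a name count, the min/max window-space depths, and then the names themselves.

```cpp
#include <cassert>
#include <vector>

// Hypothetical helper, NOT part of Selection.h: decode the raw hit
// records a GL_SELECT render pass leaves in the selection buffer.
// Each record is laid out as:
//   [name count][min depth][max depth][name 0][name 1]...
std::vector<unsigned> DecodeHitNames( const unsigned *buf, int hits ){
    std::vector<unsigned> names;
    const unsigned *p = buf;
    for( int h = 0; h < hits; ++h ){
        unsigned count = *p++;
        p += 2; // skip the min/max window-space depths
        for( unsigned n = 0; n < count; ++n )
            names.push_back( *p++ );
    }
    return names;
}
```

The collected names are exactly the object IDs that a RenderSelection-style method would return to the caller.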
Selection.h
Saturday, March 6, 2010
Saturday, February 6, 2010
An Updateable Priority Queue
There are a number of algorithms which use priority queues to schedule operations but which need to be able to update the priorities of items in the queue as operations are performed. Two examples are the Fast-Marching Method and Mesh Simplification algorithms, which greedily remove the lowest cost item from a queue, perform an operation and then update the costs of geometrically nearby items. Although the STL specification provides priority queue and heap data structures, it is not straightforward to efficiently update priorities of items that are already in the queue.
One option is to use a std::multiset and augment the stored data with a priority field. Min(), Insert(), Delete() and Update() can then be performed in logarithmic time, which is reasonable in most cases. Updating the position of a value in the queue then consists of removing the item from the queue, updating the priority value and adding the item back. This works fine in practice, but has some drawbacks. First, you need to ensure that the priority field is never changed while the data is actually in the queue, since this would i) destroy the ordering of the queue and ii) probably trigger a run-time error in the std::multiset implementation. Second, if you want to use the same data in multiple queues, you need multiple priority fields and/or multiple comparison predicates, which causes code bloat and an increased memory footprint even when you're not using the queues.
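As a rough sketch of this first option (the names and layout are my own, purely for illustration): the comparator orders the multiset by the embedded priority field, and an update must erase the exact element before touching that field.

```cpp
#include <cassert>
#include <set>

// Sketch of the multiset option -- names are hypothetical, for
// illustration only. Items carry their priority, and the comparator
// orders the multiset by that field.
struct Item {
    int   id;
    float priority; // must not change while the item is queued
};
struct ByPriority {
    bool operator()( const Item *a, const Item *b ) const {
        return a->priority < b->priority;
    }
};
typedef std::multiset<Item*, ByPriority> Queue;

// Update = erase the exact element, change the field, re-insert.
// equal_range() may span several items with equal priority, so the
// pointer itself is matched before erasing. This must be called
// BEFORE the priority field is modified directly.
void Update( Queue &q, Item *item, float new_priority ){
    std::pair<Queue::iterator, Queue::iterator> r = q.equal_range( item );
    for( Queue::iterator i = r.first; i != r.second; ++i ){
        if( *i == item ){ q.erase( i ); break; }
    }
    item->priority = new_priority;
    q.insert( item );
}
```

Note how easy it would be for a caller to assign to item->priority directly and silently corrupt the queue, which is exactly the drawback described above.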
Another option (and the one taken here) is to stitch together two std::maps. One map provides an ordered mapping from priorities to data and the other provides the reverse mapping. The basic sequence of removing an item, updating its priority and reinserting it remains, except that since the maps own their data, there is no danger of a user modifying the priorities arbitrarily. Also, when the queue is no longer needed, the additional memory required for its operation is freed, and no extra data fields are left littering the data structures. The one condition is that both the priorities AND the queued values must have an ordering. If you're storing data as pointers this comes for free, but if you're storing data by value in the queue then you will need to define a comparison predicate. However, this predicate can be reused among ALL priority queues.
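Here is a minimal, hypothetical sketch of the two-map idea — not the actual mutable_priority_queue.h, just the technique: a std::multimap orders entries by priority for pop-min, and a companion std::map lets update() find a value's current entry in logarithmic time.

```cpp
#include <cassert>
#include <map>

// Minimal sketch of the two-map idea -- NOT the actual
// mutable_priority_queue.h. by_priority orders entries for pop-min;
// by_value locates a value's current entry so update() can erase
// and re-insert it.
template<typename P, typename V>
class two_map_queue {
    typedef std::multimap<P, V> pq_type;
    pq_type by_priority;                              // priority -> value
    std::map<V, typename pq_type::iterator> by_value; // value -> its entry
public:
    // assumes v is not already queued
    void insert( const V &v, const P &p ){
        by_value[v] = by_priority.insert( std::make_pair(p, v) );
    }
    // erase the old entry, then re-insert with the new priority
    void update( const V &v, const P &p ){
        typename std::map<V, typename pq_type::iterator>::iterator f = by_value.find( v );
        if( f != by_value.end() )
            by_priority.erase( f->second );
        by_value[v] = by_priority.insert( std::make_pair(p, v) );
    }
    // remove and return the lowest-priority entry
    bool pop( V &v, P &p ){
        if( by_priority.empty() ) return false;
        typename pq_type::iterator it = by_priority.begin();
        p = it->first;
        v = it->second;
        by_value.erase( v );
        by_priority.erase( it );
        return true;
    }
    bool empty() const { return by_priority.empty(); }
};
```

Since the maps own copies of the priorities, there is no field a caller could corrupt, which is the safety property described above.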
I've called the class a mutable_priority_queue, and it shares similarities with both std::map and std::priority_queue. To ensure that the mappings don't become polluted, access to the stored data is quite restricted: the primary methods are push/insert, pop/delete and update. In-order traversal is also possible using iterators; however, the values stored in the queue cannot be modified through them.
#include <iostream>
#include "mutable_priority_queue.h"

int main( int argc, char **argv ){
    mutable_priority_queue<float,int> A;
    mutable_priority_queue<float,int>::iterator i;

    A.insert( 1, 0.0f );
    A.insert( 2, 2.0f );
    A.insert( 3, -1.0f );

    std::cout << "A:" << std::endl;
    for( i=A.begin(); i!=A.end(); i++ ){
        std::cout << "key: " << i->first << ", value: " << i->second << std::endl;
    }

    A.update( 2, -5.0f );

    std::cout << "A:" << std::endl;
    for( i=A.begin(); i!=A.end(); i++ ){
        std::cout << "key: " << i->first << ", value: " << i->second << std::endl;
    }
    return 0;
}
Operations are logarithmic in the number of stored items and about 2X slower than std::map on its own (not surprising, since two maps are used). So if your application performs a reasonable amount of processing outside of queue operations, it should be more than sufficient.
You can get the code for the queue and a slightly more complex example below:
mutable_priority_queue.h
priority_queue_example.cpp
Sunday, January 31, 2010
Cross-platform Code Profiling
In order to guide optimization efforts, it is often necessary to determine how long individual blocks of code take to execute. Although there are a number of packages available that do this, I have found them to generally i) be quite expensive, ii) be inconvenient to use and iii) generate huge output files. This is not to say these packages don't have a place, simply that sometimes you don't need all their features, so a more compact approach is useful.
The classes presented here are simply included in your project and compiled in. Blocks are instrumented manually with pre-processor macros that initialize and register timers. Timing is performed with QueryPerformanceCounter on Windows and gettimeofday on Linux/Mac, giving an approximate lower bound of about 1 microsecond on the intervals that can be resolved. If you need more resolution than this, you'll have to use something else. Statistics can then be dumped to any std::ostream, allowing you to print profiling statistics to a file, the console, a log window or whatever. The only statistics collected are the call count for each block, the average time per call and the total elapsed time in the block.
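The platform-specific timing boils down to one small function. Here is a hedged sketch of what the Windows/Unix split might look like (GetSeconds is my name; the real Profiler.h may structure this differently):

```cpp
#include <cassert>

// Hypothetical sketch of the platform split described above.
#ifdef _WIN32
#include <windows.h>
// QueryPerformanceCounter ticks divided by the counter frequency
// give elapsed time in seconds
double GetSeconds(){
    LARGE_INTEGER freq, now;
    QueryPerformanceFrequency( &freq );
    QueryPerformanceCounter( &now );
    return double(now.QuadPart) / double(freq.QuadPart);
}
#else
#include <sys/time.h>
// gettimeofday resolves to roughly microsecond granularity
double GetSeconds(){
    timeval t;
    gettimeofday( &t, 0 );
    return double(t.tv_sec) + 1e-6*double(t.tv_usec);
}
#endif
```

A block's elapsed time is then just the difference of two GetSeconds() calls, accumulated per timer along with a call count.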
Here is an example of how to use them:
#include <iostream>

// comment this out to turn off profiling
// (and avoid its associated cost)
#define USE_PROFILING
#include "Profiler.h"

int main( int argc, char **argv ){
    PROFILER_START( main_timer );

    // do stuff here

    PROFILER_END( main_timer );

    ProfileManager::Get()->PrintStatistics( std::cout );
    return 0;
}
Here are the source files. You can use them under a BSD license. If you find them useful I'd appreciate if you let me know.
Profiler.h
Profiler.cpp
Portable Dynamic Libraries
It's often useful to encapsulate components of a program in dynamic libraries. These allow you to make minor changes to a module without rebuilding the whole program, and to swap out components at runtime. This post shows how to do this from the command-line on Windows, Linux and Mac platforms for a simple example. I am assuming that explicit linking is being used; in my opinion the point of using dynamic libraries is to not need things like import libraries and the like...
To start with, there needs to be a module to make a dynamic library from. I'm going to work from the following trivial function (assumed to be saved as module.cpp):
// module.cpp
int ModuleFunction( void ){
    return 5;
}
In order to build this as a dynamic library some boiler-plate needs to be added. First of all, the whole function needs to be wrapped in an extern "C" block. This prevents C++ name mangling from changing the name of the function in the module. Secondly, under Windows the function has to be declared with __declspec(dllexport) in order to be accessible from the library. This is unnecessary on Linux and Mac machines, so the declaration is only applied when building on Windows. With these changes, module.cpp becomes:
// module.cpp
#ifdef _WIN32
#define EXPORT __declspec(dllexport)
#else
#define EXPORT
#endif

extern "C" {
    EXPORT int ModuleFunction( void ){
        return 5;
    }
}
With the module source handled, the library has to be built. The command-lines to do this differ depending on what platform and compiler you're using:
Windows, Visual C++
vcvarsall.bat (or vcvars32.bat)
cl.exe -o module.dll module.cpp /link /DLL
Windows, MinGW / Linux, GCC
g++ -fPIC -shared module.cpp -o module.dll (or module.so)
Mac, GCC
g++ -dynamiclib module.cpp -o module.so
Note that for non-trivial examples you would have to specify additional source files and include/library paths.
At this point, there should be a dynamic library named module.dll (Windows) or module.so (Linux/Mac). To actually use this library there are minor differences between Windows and Linux/Mac. The following header (DLL.h) handles these variations allowing a common interface under all three platforms:
// DLL.h
#ifndef DLL_H
#define DLL_H

#ifdef _WIN32
#include <windows.h>
#define DLLPtr      HMODULE
#define LoadDLL(a)  LoadLibraryA( a )
#define FreeDLL(a)  FreeLibrary( a )
#else
#include <dlfcn.h>
#define DLLPtr      void*
#define LoadDLL(a)  dlopen( a, RTLD_LAZY )
#define FreeDLL(a)  dlclose( a )
#endif

template<typename Func>
__inline Func GetFunc( DLLPtr dll, const char *symbol ){
#ifdef _WIN32
    return (Func)GetProcAddress( dll, symbol );
#else
    return (Func)dlsym( dll, symbol );
#endif
}

#endif
This provides functions to load and free dynamic libraries as well as retrieve function pointers from dynamic libraries. An example of its usage for the simple library defined above is:
// main.cpp
#include <iostream>
#include "DLL.h"

int main( int argc, char **argv ){
    // use "module.so" instead on Linux/Mac
    DLLPtr lib = LoadDLL( "module.dll" );
    if( !lib ){
        // handle error
        return 0;
    }
    int (*F)(void) = GetFunc<int(*)(void)>( lib, "ModuleFunction" );
    if( !F ){
        // handle error
        FreeDLL( lib );
        return 0;
    }
    std::cout << "Result: " << F() << std::endl;
    FreeDLL( lib );
    return 0;
}
Windows, Visual C++
vcvarsall.bat (or vcvars32.bat)
cl.exe -o test.exe main.cpp /link
Windows, MinGW / Mac, GCC
g++ -o test main.cpp (test.exe on Windows)
Linux, GCC
g++ -o test main.cpp -ldl
Running the compiled executable should print "Result: 5".
Some Notes:
1. I have found that repeatedly loading and unloading DLLs that were compiled with MinGW under Windows causes a crash. This appears to be isolated to MinGW, since the same programs run fine when compiled with Visual C++, or on Linux and Mac systems with GCC.
2. It is possible to build a dynamic library with one compiler for use with an application built by another compiler. This could be useful for allowing users to write their own plugins for a proprietary system. However for this to work properly, it seems that the runtime libraries used by both library and executable must be the same (if platform specific code is called) and any objects passed between the library and executable must be binary compatible. This frequently means that things like stl containers cannot be passed, since different compilers will likely use different implementations of stl.
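To illustrate note 2, one common way to keep a plugin boundary binary compatible is to exchange only plain C types and tables of function pointers. The following is a hypothetical sketch (PluginAPI and GetAPI are made-up names, not part of the code above):

```cpp
#include <cassert>

// Hypothetical illustration of note 2: keep the boundary between
// executable and plugin to plain C types and function pointers,
// avoiding objects (like STL containers) whose layout can differ
// between compilers.
extern "C" {
    struct PluginAPI {
        int          version;
        int        (*Compute)( int x );
        const char *(*Name)( void );
    };
}

// A plugin built with any compiler could fill in the table:
static int MyCompute( int x ){ return x * 2; }
static const char *MyName( void ){ return "doubler"; }

// The single exported, extern "C" entry point returns the table.
// In a real plugin this is the symbol the host would look up with
// GetFunc after loading the library.
extern "C" PluginAPI *GetAPI( void ){
    static PluginAPI api = { 1, MyCompute, MyName };
    return &api;
}
```

The version field gives the host a cheap way to reject plugins built against an incompatible interface before calling through any of the pointers.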