Playing with the OBJ file loader for OpenGLContext this evening (briefly). Forgot to check in a number of fixes on Friday (oops). Anyway, as of now CVS OpenGLContext with CVS PyOpenGL can load OBJs from the internet via the (now somewhat inappropriately named) bin/vrml_view.py script. Still haven't found any samples where there are .MTL files (and/or textures) connected to the OBJ files. OpenGLContext should be able to process those links, but the functionality is still untested.
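For anyone curious what an OBJ loader actually has to chew through, here's a toy sketch of the basic parsing (a hypothetical helper for illustration, not the actual OpenGLContext loader code):

```python
# Minimal sketch of the kind of parsing an OBJ loader performs.
# Real loaders also handle vt/vn records, groups, and mtllib/usemtl
# links to .MTL material files.

def parse_obj(text):
    """Collect vertices and triangle faces from Wavefront OBJ text."""
    vertices, faces = [], []
    for line in text.splitlines():
        parts = line.split()
        if not parts or parts[0].startswith('#'):
            continue
        if parts[0] == 'v':
            vertices.append(tuple(float(x) for x in parts[1:4]))
        elif parts[0] == 'f':
            # each face element may be v, v/vt, or v/vt/vn; keep the
            # (1-based) vertex index and convert to 0-based
            indices = [int(p.split('/')[0]) - 1 for p in parts[1:]]
            # fan-triangulate polygons with more than three vertices
            for i in range(1, len(indices) - 1):
                faces.append((indices[0], indices[i], indices[i + 1]))
    return vertices, faces

sample = """
v 0.0 0.0 0.0
v 1.0 0.0 0.0
v 1.0 1.0 0.0
v 0.0 1.0 0.0
f 1 2 3 4
"""
verts, tris = parse_obj(sample)
# the quad face fan-triangulates into two triangles
```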
Anyway, with a little magnolia object that compiles down to triangle arrays of ~3000 vertices, my workstation (with a rather outdated GeForce 7600 GS) easily renders at 100fps (the capped render rate for OpenGLContext) with around 10% CPU usage (using VBO support). Obviously textures, colours and the like would alter the speed, but it doesn't seem slow enough to worry about at this point.
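The "compiled to triangle arrays" step is basically this: expand the indexed faces into one contiguous float32 array, which is the layout a VBO upload wants. A hedged numpy sketch (the array names are illustrative, not OpenGLContext internals):

```python
# Flatten indexed triangle data into a single contiguous float32
# array suitable for VBO upload.
import numpy as np

vertices = np.array([
    [0.0, 0.0, 0.0],
    [1.0, 0.0, 0.0],
    [1.0, 1.0, 0.0],
    [0.0, 1.0, 0.0],
], dtype='f')
triangles = np.array([[0, 1, 2], [0, 2, 3]], dtype='i')

# fancy-indexing expands the indices into a flat per-triangle array:
# two triangles * three vertices each = six rows of xyz
flat = vertices[triangles.ravel()]
```

In PyOpenGL the result would then be wrapped with OpenGL.arrays.vbo.VBO for upload to the card.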
As mentioned, I'd like to get something put together that's GPGPU-ish so that I can be sure we're really supporting the GPGPU operations. If you have PyOpenGL code that you feel provides a great sample of what people want to do with GPGPU in Python, let me know.
Things I'm curious about:
- would it be useful to create an array wrapper object that tries to translate your numpy-like operations into a recipe to be executed on the GPU, or would you rather write the GL-level code yourself to have complete control? Or would both approaches be needed?
- would you want to handle "streaming" data for very large data-sets yourself, or have the system choose the largest available window and stream the data automatically? What kind of feedback do you need for this kind of thing, what kind of resumability is important (if at all)?
- would you rather have the system kick out a low-level recipe to run on dozens of machines, or work interactively to let you play with the numbers, or both?
- would you want the results to be computed as-needed, or to explicitly trigger their generation with a command?
- would you want to integrate the operations with the IPython cluster operations, or are those generally different types of operation? i.e. would you need to scatter/gather your data-sets across many machines? If so, how automatic would you need it to be?
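To make the first question concrete, here's a toy pure-Python sketch of what I mean by recording numpy-like operations as a "recipe" rather than executing them immediately (all names illustrative, no GPU involved; a real version would compile the recipe to GPU code). It also shows the explicit-trigger style from the as-needed-versus-command question:

```python
# Toy sketch: a wrapper that records operations as a recipe and only
# evaluates when asked.  GPUArray and compute() are hypothetical names.
import numpy as np

class GPUArray:
    def __init__(self, data, recipe=None):
        self.data = np.asarray(data, dtype='f')
        self.recipe = recipe or []
    def __mul__(self, other):
        # record the operation instead of performing it
        return GPUArray(self.data, self.recipe + [('mul', other)])
    def __add__(self, other):
        return GPUArray(self.data, self.recipe + [('add', other)])
    def compute(self):
        """Explicitly trigger evaluation of the recorded recipe."""
        result = self.data
        for op, operand in self.recipe:
            if op == 'mul':
                result = result * operand
            elif op == 'add':
                result = result + operand
        return result

a = GPUArray([1.0, 2.0, 3.0])
b = (a * 2.0) + 1.0        # nothing computed yet, just a recipe
values = b.compute()       # now evaluates: [3., 5., 7.]
```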
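Similarly, here's a hedged sketch of the automatic-streaming idea for large data-sets: the system picks a window size and pushes chunks through a kernel, with a progress callback for feedback (hypothetical names throughout; the kernel argument stands in for a GPU operation):

```python
# Sketch of windowed streaming over a data-set too large to process
# in one shot, with progress reporting.
import numpy as np

def stream_process(data, window, kernel, progress=None):
    """Apply kernel to successive windows of data, concatenating results."""
    out = []
    for start in range(0, len(data), window):
        chunk = data[start:start + window]
        out.append(kernel(chunk))
        if progress is not None:
            # report how many elements have been consumed so far
            progress(min(start + window, len(data)), len(data))
    return np.concatenate(out)

data = np.arange(10, dtype='f')
doubled = stream_process(data, window=4, kernel=lambda c: c * 2)
```

Checkpointing the start offset between windows would be the obvious hook for resumability, if that matters to people.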