Other 90% of OBJ files still pending
Playing with the OBJ file loader for OpenGLContext this evening (briefly). Forgot to check in a number of fixes on Friday (oops). Anyway, as of now, CVS OpenGLContext with CVS PyOpenGL can load OBJs from the internet via the (now somewhat inappropriately named) bin/vrml_view.py script. Still haven't found any samples that have .MTL files (and/or textures) connected to them. OpenGLContext should be able to process those links, but the functionality is still untested.
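For anyone curious what the loader has to deal with, OBJ's core is a simple line-oriented text format. Here's a minimal sketch of the subset discussed above (vertex records, face records, and the mtllib reference that points at a .MTL file) — illustrative only, not OpenGLContext's actual loader:

```python
def parse_obj(text):
    """Parse 'v' and 'f' records from OBJ text; collect mtllib references.

    Sketch of the OBJ subset discussed here, not OpenGLContext's loader.
    """
    vertices, faces, materials = [], [], []
    for line in text.splitlines():
        parts = line.split()
        if not parts or parts[0].startswith('#'):
            continue
        if parts[0] == 'v':
            vertices.append(tuple(float(x) for x in parts[1:4]))
        elif parts[0] == 'f':
            # face indices are 1-based and may carry /vt/vn suffixes
            faces.append(tuple(int(p.split('/')[0]) - 1 for p in parts[1:]))
        elif parts[0] == 'mtllib':
            materials.extend(parts[1:])
    return vertices, faces, materials
```

The mtllib entries are exactly the links mentioned above that remain untested for lack of sample files.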
Anyway, with a little magnolia object that gets compiled to ~3000-vertex triangle arrays, my workstation (with a rather outdated GeForce 7600 GS) easily renders at 100fps (the capped render rate for OpenGLContext) with around 10% CPU usage (using VBO support). Obviously textures, colours and the like would alter the speed, but it doesn't seem slow enough to worry about at this point.
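The "compiled to triangle arrays" step is essentially expanding the indexed (possibly polygonal) faces into a flat vertex array that can be handed to a VBO in one go. A rough sketch of that compilation, assuming the simple vertices/faces structures of an OBJ parse (OpenGLContext's real compilation does considerably more):

```python
import numpy as np

def compile_triangles(vertices, faces):
    """Expand indexed, possibly polygonal faces into a flat triangle-vertex
    array by fanning each polygon into triangles. Sketch only -- the real
    compilation handles normals, texture coordinates, etc."""
    tris = []
    for face in faces:
        for i in range(1, len(face) - 1):  # triangle fan around vertex 0
            tris.extend([face[0], face[i], face[i + 1]])
    verts = np.asarray(vertices, dtype='f')
    return verts[np.asarray(tris, dtype='i')]  # shape (tri_count * 3, 3)
```

The resulting contiguous array is the kind of thing you upload once to a VBO and then draw with a single glDrawArrays call, which is why the CPU usage stays so low.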
As mentioned, I'd like to get something put together that's GPGPU-ish so that I can be sure we're really supporting the GPGPU operations. If you have PyOpenGL code that you feel provides a great sample of what people want to do with GPGPU in Python, let me know.
Things I'm curious about:
- would it be useful to create an array wrapper object that tries to translate your numpy-like operations into a recipe to be executed on the GPU, or would you rather write the GL-level code yourself to have complete control? Or would both approaches be needed?
- would you want to handle "streaming" data for very large data-sets yourself, or have the system choose the largest available window and stream the data automatically? What kind of feedback do you need for this kind of thing, and what kind of resumability is important (if any)?
- would you rather have the system kick out a low-level recipe to run on dozens of machines, or work interactively to let you play with the numbers, or both?
- would you want the results to be computed as-needed, or to explicitly trigger their generation with a command?
- would you want to integrate the operations with the IPython cluster operations, or are those generally different types of operation? i.e. would you need to scatter/gather your data-sets across many machines? If so, how automatic would that need to be?
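To make the first question concrete, here's a toy of what the "recipe" idea might feel like from the user's side: operations on a wrapper object are recorded rather than executed, producing a plan that a GPU backend could later compile to GLSL. Every name here is hypothetical — this is a sketch of an interface, not an existing library:

```python
class GPUArray:
    """Toy deferred-execution wrapper: arithmetic builds a recipe
    (a list of steps) instead of computing anything. Hypothetical API."""

    def __init__(self, name, recipe=None):
        self.name = name
        self.recipe = recipe or []

    def _op(self, op, other):
        other_name = other.name if isinstance(other, GPUArray) else repr(other)
        step = (op, self.name, other_name)
        # each intermediate gets a temporary name like 't0', 't1', ...
        return GPUArray('t%d' % len(self.recipe), self.recipe + [step])

    def __add__(self, other):
        return self._op('add', other)

    def __mul__(self, other):
        return self._op('mul', other)

a = GPUArray('a')
b = GPUArray('b')
result = (a + b) * 2.0
# result.recipe is now a step list a backend could turn into a shader
```

Whether that indirection is worth it, versus writing the GL-level code directly, is exactly the question being asked.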
Foone on 06/30/2008 10:20 a.m. #
Would you like me to donate some OBJ+MTL+Texture files?
I wrote my own OBJ loader a while back and I've still got the test files for it.
Rene Dudfield on 06/30/2008 4:52 p.m. #
You can easily make OBJ files with materials using Blender and Wings 3D.
It's nice to try models from different programs, so that you can see their quirks.
Last I tried, cgkit was the only Python module that could load the models I was using.
Another major performance problem is reusing textures: if one image is used by 10 different textures, you don't want to upload 10 copies of that image as separate textures.
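The sharing Rene describes amounts to keying the GL texture object by its source image, so ten materials referencing one image trigger a single upload. A minimal sketch, where the upload function is a stand-in rather than real PyOpenGL API:

```python
class TextureCache:
    """Share one GL texture per source image. The 'upload' callable
    (path -> texture id) is a placeholder for the real GL upload path."""

    def __init__(self, upload):
        self.upload = upload
        self.textures = {}

    def get(self, image_path):
        if image_path not in self.textures:
            self.textures[image_path] = self.upload(image_path)
        return self.textures[image_path]
```

Usage: every material that references `bark.png` calls `cache.get('bark.png')` and receives the same texture id; the upload happens exactly once.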
I think portable texture upload/download is tricky code to get right for GPGPU uses, and it's the minimum you need: upload your data, do some calculations on it, then download it again. I have a little script that tries out the various methods and times them for you. You can see a 10x speed difference or more depending on the card, the driver, and the method used.
I assume you have seen PyGPU, given your mention of translating numpy operations into GLSL primitives?
Mike Fletcher on 06/30/2008 9:17 p.m. #
If you want, just put them online -- OpenGLContext is capable of downloading them from a web server. If not, sure, I'd be happy to have them.
Mike Fletcher on 07/01/2008 10:42 p.m. #
OpenGLContext should handle the textures properly in that respect: it creates a shared Appearance node with a shared ImageTexture sub-node, which results in a single texture object in the GL.
Haven't looked at the PyGPU stuff, though it is now on my "should look into that" list.
Prashant Saxena on 07/03/2008 9:08 a.m. #
We are using PyOpenGL in custom tools we are developing for the studio pipeline. Right now we are designing a PyOpenGL-based application to load the .rib (Renderman) format and do all the lighting, texturing and rendering from it. One of our major concerns is how large geometric data sets can be handled in terms of viewport navigation and other operations. We have scenes from 1MB to 100MB in size, and it's pretty normal to have that amount of data nowadays. Commercial applications like Max and Maya require high-end graphics cards to perform smooth viewport operations. We did a small test loading 1MB of 3D data using PyOpenGL, and viewport operations were much better than Maya's on a machine without a professional graphics card. Let me know if you need a heavy-duty geometry file for testing purposes.
Mike Fletcher on 07/03/2008 9:21 p.m. #
PyOpenGL should be fine with high-polygon count models. After all, it's just shunting the information into the GL.
OpenGLContext would likely choke on a 100MB file due to converting everything to the extremely generic VRML97 format then doing an enormous amount of processing to turn it into low-level primitives again.
If you've got complex scenegraphs, straightforward frustum culling would be the first step (though given the level you're working at, I'd guess you've already got that). There are lots of other ways to reduce the amount you need to render, depending on what you need to get done.
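For reference, the core of frustum culling is just a conservative bounding-sphere test against the six frustum planes. A minimal sketch, assuming planes are given as (a, b, c, d) coefficients with inward-facing normals (so ax + by + cz + d >= 0 means "inside"):

```python
def sphere_in_frustum(center, radius, planes):
    """Conservative sphere-vs-frustum test: cull only when the sphere
    lies entirely behind some plane. Each plane is (a, b, c, d) with an
    inward-facing normal; a sphere touching any part of the frustum
    is kept."""
    x, y, z = center
    for a, b, c, d in planes:
        # signed distance from sphere center to the plane
        if a * x + b * y + c * z + d < -radius:
            return False  # wholly behind this plane: cull
    return True
```

Objects (or whole scenegraph branches, via their bounding spheres) that fail the test never reach the GL at all, which is where the big wins come from on 100MB scenes.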