Profiling OpenGLContext fun...

Profiling OpenGLContext on a relatively large VRML97 world.  Interesting stats (in OpenGL, ignoring OpenGLContext slow-points):

  • 3% of total runtime is in an accessor that does almost nothing itself; it seems the ctypes array sub-object is a Python instance that runs multiple operations in its initializer and is recreated on each access (with many accesses per array per frame)
  • 9% of total runtime is in the wrapper call function (that is, the thing that replaces the old SWIG logic), though that 9% includes the C-level calls (the actual work) as well as the wrapper overhead
  • 16% of the total runtime is in OpenGL
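That 3% "recreated on each access" pattern has a standard fix: cache the sub-object after the first access. A minimal sketch (the class and its attribute names here are illustrative, not the actual ctypes internals):

```python
class ArrayWrapper:
    """Hypothetical stand-in for an array wrapper whose sub-object
    is expensive to construct (multiple operations in its initializer)."""

    def __init__(self, data):
        self._data = data
        self._cached_view = None  # populated lazily, then reused

    @property
    def view(self):
        # Construct the sub-object once instead of on every access;
        # per-frame code that touches .view many times now pays the
        # initializer cost only on the first access.
        if self._cached_view is None:
            self._cached_view = tuple(self._data)  # stand-in for the real init work
        return self._cached_view
```

The cache must be invalidated if the underlying data can change, which is the usual reason such caching isn't done by default.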

Discovered a number of problems in the code-base while working on this. For instance, pydispatcher was calling str() on the im_self objects for methods in order to provide friendly debugging information... which caused significant slowdowns when im_self was an array object (or anything whose str representation included such an object). Oh, and glMaterial is being called far too often; it's 3-4% of total runtime all by itself. Should investigate material clumping so that equal-material siblings render together without re-setting up the material parameters.
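The clumping idea amounts to sorting the render list by material identity so that equal-material nodes are adjacent and the material setup runs once per run rather than once per node. A sketch of the sorting pass (node and material types are hypothetical, not OpenGLContext's actual scenegraph classes):

```python
def clump_by_material(nodes, material_key):
    """Order renderable nodes so siblings sharing a material render
    consecutively, letting us skip redundant material setup.
    `material_key` maps a node to a hashable material identity.
    Returns the reordered nodes and the number of state changes incurred."""
    ordered = sorted(nodes, key=material_key)
    state_changes = 0
    current = object()  # sentinel: no material bound yet
    for node in ordered:
        key = material_key(node)
        if key != current:
            state_changes += 1  # here we'd do the glMaterial setup calls
            current = key
        # render(node) would go here
    return ordered, state_changes
```

With N nodes sharing M distinct materials this drops material setup from N calls to M, which is where the 3-4% would come back.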

I'm at the point where I need to think about writing little C accelerator functions for PyOpenGL, it seems. The parameterized "wrapper" call function and the numpy integration code are obvious first targets. Also added support for "adding" integers to OpenGL.arrays.vbo.VBO instances, creating parameters that can be passed as offsets into the array.
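The "vbo + integer" addition is just operator overloading: adding an int to the buffer object yields a lightweight offset parameter. A pure-Python sketch of the pattern (these classes are illustrative stand-ins, not the real OpenGL.arrays.vbo implementations):

```python
class VBOOffsetSketch:
    """A byte offset into a buffer object, usable wherever a pointer
    parameter is expected once the buffer is bound."""

    def __init__(self, vbo, offset):
        self.vbo = vbo
        self.offset = offset


class VBOSketch:
    """Stand-in for a VBO wrapper supporting `vbo + n`."""

    def __init__(self, data):
        self.data = data

    def __add__(self, other):
        # vbo + 24 -> a parameter meaning "24 bytes into this buffer"
        return VBOOffsetSketch(self, int(other))
```

In real PyOpenGL code the resulting offset object would be handed to a pointer-taking call (e.g. a vertex-pointer setup) while the VBO is bound, instead of a raw ctypes pointer.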

I also want to play with scenegraph restructuring and interleaved arrays for the data-sets (that is, change how OpenGLContext "compiles" its arrays for PyOpenGL).  That won't affect PyOpenGL, of course, but should produce better guidance for how to use PyOpenGL efficiently.
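Interleaving means packing the per-vertex attributes into a single array so one stride-based pointer setup (rather than several separate arrays) feeds the card. A sketch of the packing step, assuming position-plus-normal records (this is not OpenGLContext's actual array-compilation code):

```python
from array import array


def interleave(positions, normals):
    """Pack per-vertex positions and normals into one interleaved float
    array (x, y, z, nx, ny, nz per vertex), the layout a single
    stride-based pointer call can consume.  Inputs are sequences of
    3-float records, one per vertex."""
    assert len(positions) == len(normals)
    out = array("f")
    for p, n in zip(positions, normals):
        out.extend(p)
        out.extend(n)
    stride = 6 * out.itemsize          # bytes from one vertex record to the next
    normal_offset = 3 * out.itemsize   # normals start after the 3 position floats
    return out, stride, normal_offset
```

The stride and offset are exactly the values the pointer-setup calls need; keeping them next to the packed data is most of the "compilation" step.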

Looked for OpenGL 2.1/3.x compatible laptops as well, didn't find anything at a reasonable price with the features I'm looking for.  Closest seems to be a ThinkPad w500 with all the upgrades, but the Linux support doesn't seem to be there for the card.  Strangely, the machine appears to be available from a Linux certified laptop dealer, so it must be possible, just not easy.  There are $1700 machines that have slightly older nVidia cards (8000 series), but anything with a high-end card is well into stratospheric prices ($3000 or so).


  1. Paul Hildebrandt on 11/08/2008 12:48 a.m. #

    Do you have a PyOpenGL program that can benchmark the computers you are talking about? It would be interesting to try said program out on different computers.

  2. Mike Fletcher on 11/08/2008 8:46 p.m. #

    Don't currently have an effective benchmark that's easily redeployed. I'm just using OpenGLContext's script to look at larger VRML97 worlds (interactively).

  3. Mike Fletcher on 11/08/2008 9:25 p.m. #

    Another "sheesh" moment: 4.18% of runtime in warnings.warn() from use of a deprecated Numpy function...
