One of the things I'd like to have for a revised OpenGLContext/scenegraph API is a nice, efficient, friendly mechanism for processing buffer data. I currently tend to follow VRML97's OpenGL 1.1-style array model, which is very dated these days. Each component of a vertex is separated out into position, normal, and textureCoordinate arrays, and the drawing operation indexes into those arrays in lock-step. Modern OpenGL (shaders) works best when you have "interleaved" data-types for your vertices; that is, you pack (position, normal, textureCoordinate1, textureCoordinate2, someOtherValue) into a single VBO and then just use offsets into the VBO for the actual rendering.
The rendering loop (the part most likely to be coded in C/Cython/C++ eventually) doesn't really have to "deal with" the arrays other than as opaque blobs, as it is the shader which interprets what is inside them. So, only the "client" side needs to model them. Numpy structured data-types should provide a very nice way to do the modelling:
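A minimal sketch of what that modelling might look like (the field names and layout here are illustrative, not a fixed API):

```python
import numpy as np

# A hypothetical interleaved vertex format expressed as a numpy
# structured dtype; each record packs all per-vertex attributes
# contiguously, just like an interleaved VBO.
vertex_dtype = np.dtype([
    ('position', np.float32, (3,)),   # x, y, z
    ('normal',   np.float32, (3,)),   # nx, ny, nz
    ('texCoord', np.float32, (2,)),   # u, v
])

N = 4
a = np.zeros(N, dtype=vertex_dtype)

# Each field comes back as a strided view into the same packed buffer:
print(a['position'].shape)   # (4, 3)
print(a['texCoord'].shape)   # (4, 2)

# A single coordinate column is just another strided view; with this
# subarray layout it's a[:, 0] rather than a named 'x' field (a nested
# [('x', ...), ('y', ...), ('z', ...)] dtype would give named access):
x = a['position'][:, 0]
x[:] = [0.0, 1.0, 2.0, 3.0]  # writes through to the packed records
```

Because every field is a view, edits through any of them land in the one interleaved block of memory that eventually gets uploaded to the card.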
A structured data-type like that should create a VBO-compatible, friendly interface to N data-points, so that a['position'] is an N*3 array of 3-float vectors and a['texCoord'] is an N*2 array of 2-float vectors, while a['position']['x'] (given nested x/y/z fields) is a simple linear array of x coordinates. At that point I can just stop worrying about array representations and can use numpy's fast array-manipulation routines (which I already do in OpenGLContext).
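The "offsets into the VBO" that the rendering side needs also fall straight out of the dtype: the stride is the record size, and numpy already tracks each field's byte offset, which is exactly the shape of argument that calls like glVertexAttribPointer expect. A sketch, assuming the same illustrative dtype as above (this is not OpenGLContext's actual rendering code):

```python
import numpy as np

vertex_dtype = np.dtype([
    ('position', np.float32, (3,)),
    ('normal',   np.float32, (3,)),
    ('texCoord', np.float32, (2,)),
])

# The stride for every attribute is just the record size...
stride = vertex_dtype.itemsize
print(stride)  # 32

# ...and dtype.fields maps each field name to (sub-dtype, byte offset),
# so the per-attribute offsets need no hand-maintained bookkeeping:
for name in vertex_dtype.names:
    sub_dtype, offset = vertex_dtype.fields[name][:2]
    print(name, offset)
# position 0
# normal 12
# texCoord 24
```

The rendering loop can treat the buffer itself as an opaque blob and hand these (stride, offset) pairs to the attribute-pointer setup, leaving interpretation entirely to the shader.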
Another approach would be to use a custom-coded C-ish extension providing just the basics of a Vertex object (with configurable fields), some dot and cross product operations, and some other basic math... I would control the API, sure, but that doesn't really convince me it would be worthwhile. Similarly, I could pick up a 3D math-focused library and use that, but then I'm still using someone else's API, so why not use the "standard" Python one?