## Planning to use Numpy Structured Data Types

Written by Mike on Feb. 16, 2012 in Snaking.

One of the things I'd like to have for a revised OpenGLContext/scenegraph API is a nice, efficient, friendly mechanism for processing buffer data.  I currently tend to follow VRML97's OpenGL 1.1-style array model, which is very dated these days: each component of a vertex is separated out into position, normal, and textureCoordinate arrays, and the drawing operation indexes into those arrays in lock-step.  Modern OpenGL (shaders) pretty much works best when you have "interleaved" data-types for your vertices, that is, you pack (position, normal, textureCoordinate1, textureCoordinate2, someOtherValue) into a single VBO and then just use offsets into the VBO for the actual rendering.

The rendering loop (the part most likely to be coded in C/Cython/C++ eventually) doesn't really have to "deal with" the arrays other than as opaque blobs, as it is the shader which interprets what is inside them.  So, only the "client" side needs to model them.  Numpy structured data-types should provide a very nice way to do the modelling:

```python
dtype = [
    ('position', [('x','1f'), ('y','1f'), ('z','1f')]),
    ('normal',   [('x','1f'), ('y','1f'), ('z','1f')]),
    ('texCoord', [('s','1f'), ('t','1f')]),
]
```

should create a VBO-compatible, friendly interface to N data-points, so that a['position'] behaves as an N-length array of 3-float vertices and a['texCoord'] as an N-length array of 2-float texture coordinates, while a['position']['x'] is a simple linear array of x coordinates.  At that point I can just stop worrying about array representations, and I can use numpy's fast array-manipulation routines (which I already do in OpenGLContext).
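A minimal sketch of what that would look like in practice (the variable names are mine, and I've used numpy's explicit 'f4' field code in place of the '1f' shorthand; the memory layout is the same):

```python
import numpy as np

# Nested structured dtype: one record per vertex, fields interleaved
# in memory exactly as a shader-era VBO wants them.
vec3 = [('x', 'f4'), ('y', 'f4'), ('z', 'f4')]
vertex = np.dtype([
    ('position', vec3),
    ('normal',   vec3),
    ('texCoord', [('s', 'f4'), ('t', 'f4')]),
])

a = np.zeros(4, dtype=vertex)            # N = 4 interleaved vertices
a['position']['x'] = [0.0, 1.0, 1.0, 0.0]

# Each record is 8 floats = 32 bytes: exactly the stride you'd hand
# to glVertexAttribPointer, with the field offsets as the pointers.
print(a.itemsize)                        # 32
print(a.dtype.fields['normal'][1])       # byte offset of 'normal': 12
print(a['position']['x'])                # plain 1-D array of x coords
```

One wrinkle worth noting: with nested fields, a['position'] comes back as an N-length structured view (fields x, y, z) rather than a bare N×3 float array; when the plain-float shape is needed, numpy.lib.recfunctions.structured_to_unstructured (or an equivalent view) gets you there.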

The only real downside (assuming it all works as expected), IMO, is the large dependency (does anyone care about that anymore?) and having the numpy implementation detail "leak" into my code-base.  That's mostly an annoyance when I look at porting to Javascript, and really, one way or another there needs to be an interface, and that interface *will* leak into the code using it.

Another approach would be to use a custom-coded C-ish extension providing just the basics of a Vertex object (with configurable fields), some dot and cross product operations, and some other basic math... I would control the API, sure, but that doesn't really convince me it would be worthwhile.  Similarly, I could pick up a 3D math-focused library and use that, but then I'm still using someone else's API, so why not use the "standard" python one?

1. Peter Shinners on 02/11/2012 1:19 p.m. #

I feel your pain. I really wish the Numpy array got rolled into the standard libraries. Alas...

2. Claudio Canepa on 02/11/2012 2:55 p.m. #

numpy looks good:

- well maintained
- you can ensure the memory layout OpenGL wants
- people are more likely to learn a mainstream library than a relatively little-used 3D library
- a custom-coded extension will suck more time from your 'time allocated for open source'

3. Malcolm Tredinnick on 02/11/2012 5:24 p.m. #

I think the numpy requirement will be fine. It's a well-packaged module, so not particularly onerous for people to just download and install, regardless of their particular platform of choice. Your logic about just wanting to use an established solution makes complete sense here; it's very solid code.

4. Casey Duncan on 03/01/2012 11:14 a.m. #

numpy is great, but the big downside is inevitably packaging the final product. Assuming this is used for games or graphical apps, a nice double-clickable executable is greatly desired. Numpy has its own build mechanism that makes this painful.

That said, having gone down the road of implementing my own geometry primitives: you will spend tons of time trying to reimplement something that even begins to approach the richness of numpy's API, even just for basic arrays. Doing it today, I would probably focus on fixing numpy's build to be friendly for packaging. Not sexy work, but probably a better use of energy.
