Regression on Lists/Tuples

Just discovered an annoying regression in the converters for lists/tuples, namely that they will core when you have the accelerator module installed.  They don't core without the accelerator, but that seems to be just dumb luck.  Basically, when you pass a list-of-whatevers to PyOpenGL, we have to convert it to something C-friendly, so we want to do this:

final_value = asArray( your_list )
pointer = dataPointer( final_value )
baseFunction( ..., pointer )

but to make that work, you need to keep final_value alive for the duration of the call to baseFunction.  Which isn't happening with the current code.  Worse, at the point of the from_param call the code doesn't have access to the data-type that's being converted (it's called from low-level ctypes code)... will need to alter the protocol to support it.
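To make the lifetime issue concrete, here's a minimal sketch of the pattern in plain ctypes. The asArray/dataPointer helpers below are hypothetical stand-ins for PyOpenGL's converters, not its actual implementation; the point is just that the converted array must stay referenced in a local until the C call returns:

```python
import ctypes

def asArray(values, ctype=ctypes.c_float):
    # Hypothetical converter: build a C-friendly array from a Python list.
    return (ctype * len(values))(*values)

def dataPointer(array):
    # Hypothetical helper: a void pointer to the array's data.
    return ctypes.cast(array, ctypes.c_void_p)

def call_with_list(baseFunction, your_list):
    # final_value MUST remain referenced here until baseFunction
    # returns; if only the raw pointer survives, Python is free to
    # collect the array and the C side reads freed memory.
    final_value = asArray(your_list)
    pointer = dataPointer(final_value)
    return baseFunction(pointer)
```

The buggy path is equivalent to returning only `pointer` from the converter and letting `final_value` go out of scope before the underlying C function runs.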

The bug managed to get pretty far along before I noticed it because I never use lists-of-floats for parameters, I always use Numpy or Numeric or ctypes objects.  Anyway, will have to fix it ASAP.


  1. jdm

    jdm on 12/08/2008 6:39 a.m. #

    Please define the verb "to core" in the sense that you are using it, or provide a link.

  2. Mike Fletcher

    Mike Fletcher on 12/08/2008 8:10 a.m. #

It's a short form for "dump core", see Wikipedia for further description. In this case, it's caused by a memory access failure due to the C code not checking whether the passed object is an array object (so it just pulls random data out of the list's structures). The real problem is higher-level, but that's what causes the core.

  3. jdm

    jdm on 12/08/2008 8:43 p.m. #

I've pored over many dumps of literal core memory, and have heard core dumps abbreviated as "dumps" but never as "cores", which to me is as illogically puzzling as calling them "memories." But logic is seldom a factor in how language evolves... thanks for the explanation.

  4. Mike Fletcher

    Mike Fletcher on 12/10/2008 11:15 p.m. #

    Have the regression fixed now, but the fix is going to slow things down significantly. Will likely need to create a few more (trivial) accelerator functions to get back that lost performance.
