Standard test fixture specifying database records? (Frustrating writing these things ad hoc all the time)


Trying to be a good little Agile/Extreme programmer, I've been writing a test suite as I work on the ping scanner. The "Virtual Plant" stuff is nice for modelling the whole network-level plant; it makes the SNMP queries all seem to find modems and that kind of useful thing. It doesn't, however, help with modelling the PostgreSQL database that the code queries to find each modem, which means the code just winds up scanning the local copy of the main database rather than a simplified and predictable version of it.

So, all you Agile peoples: what is the accepted best practice for unit tests that need a particular database configuration to be established before the test and eliminated after it? Is it dumping the database to SQL and then reloading it? I hope not, because then I'd either need to do a lot of paring down to specify a subset of the database for each case, or deal with multi-hour testing cycles (it takes more than an hour to load the database).
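For what it's worth, the usual pattern I've seen discussed is a per-test fixture: setUp() builds a tiny, predictable database holding just the records the test cares about, and tearDown() throws the whole thing away, so the real (multi-hour-to-load) database is never touched. A minimal sketch of the shape of it, using sqlite3 from the standard library so it's self-contained; the `modem` table and its columns are made up for illustration, not the real schema:

```python
import sqlite3
import unittest


class ModemScanTest(unittest.TestCase):
    """Each test gets a fresh in-memory database with a small, known
    fixture, instead of scanning a local copy of the main database."""

    def setUp(self):
        # In-memory database: created for this test, discarded in tearDown.
        self.conn = sqlite3.connect(":memory:")
        self.conn.execute(
            "CREATE TABLE modem (id INTEGER PRIMARY KEY, mac TEXT, ip TEXT)"
        )
        self.conn.executemany(
            "INSERT INTO modem (mac, ip) VALUES (?, ?)",
            [
                ("00:11:22:33:44:55", "10.0.0.1"),
                ("00:11:22:33:44:56", "10.0.0.2"),
            ],
        )
        self.conn.commit()

    def tearDown(self):
        # No dump/reload cycle: the fixture database simply goes away.
        self.conn.close()

    def test_finds_all_modems(self):
        # Stand-in for the real scanner's "find every modem" query.
        rows = self.conn.execute("SELECT ip FROM modem ORDER BY ip").fetchall()
        self.assertEqual([row[0] for row in rows], ["10.0.0.1", "10.0.0.2"])
```

The same idea translates to PostgreSQL by pointing setUp() at a scratch database instead of an in-memory one; the point is that the fixture data lives in the test code, not in an SQL dump of production.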

The particular code being tested cannot use rollback(), btw, as it commits changes to the database all through the process (which is pretty much a requirement for database work in Twisted).
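Given that constraint, the workable alternative to rollback() seems to be discarding the entire scratch database after each test: with PostgreSQL that would mean creating each test database from a small template (CREATE DATABASE ... TEMPLATE ...) and dropping it afterwards, so mid-run commits are harmless. A sketch of that shape, again with sqlite3 and a throwaway file so it runs anywhere; the helper names and schema are mine, not from any real fixture library:

```python
import os
import sqlite3
import tempfile


def create_scratch_db():
    """Build a throwaway database file with the fixture schema.

    The PostgreSQL analogue would be creating a fresh database from a
    pared-down template rather than loading a dump of the real thing.
    """
    fd, path = tempfile.mkstemp(suffix=".db")
    os.close(fd)
    conn = sqlite3.connect(path)
    conn.execute("CREATE TABLE modem (mac TEXT, ip TEXT)")  # made-up schema
    conn.commit()
    return path, conn


def destroy_scratch_db(path, conn):
    """Teardown: drop the whole database; nothing to roll back."""
    conn.close()
    os.remove(path)


path, conn = create_scratch_db()
# The code under test is free to commit as it goes, Twisted-style...
conn.execute("INSERT INTO modem VALUES ('00:11:22:33:44:55', '10.0.0.1')")
conn.commit()
count = conn.execute("SELECT count(*) FROM modem").fetchone()[0]
# ...because cleanup is "delete the database", not "undo the transaction".
destroy_scratch_db(path, conn)
```

Creating and dropping a small template-based database per test is cheap compared to the hour-plus reload, as long as the fixture schema stays pared down.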

Comments

  1. x on 06/01/2004 4:13 a.m.


    I may be extreme, but alas not in an agile way. I recall Peter of PyGTA talking about this before... well, perhaps not this specifically, but in general the lengths they had to go to to set up a simulated test environment. But I'm sure you know this.

    Strangely foggy out this morning.

  2. Mike Fletcher on 06/01/2004 2:19 p.m.


    Was pea soup last night too. Peter's responses weren't particularly helpful at the time, mostly because they weren't AFAIR dealing with heavily cross-linked SQL-style databases.

