Evening Learning; I was hoping to get something done...
The documentation on setting up the mapping isn't particularly helpful about best practices for splitting an existing app across two databases. I spent quite some time playing with multiple DeclarativeBase classes, and eventually settled on having model.init_model( *engines ) build a mapping from Declarative classes and Tables to the connection each table should use, via introspection on the classes themselves. Once you have that mapping, you pass it to DBSession.configure( binds = your_mapping ).
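A minimal sketch of that arrangement, with hypothetical User/Stat classes and in-memory SQLite engines standing in for the project's real model (the actual init_model lives in the TurboGears project and isn't shown here):

```python
from sqlalchemy import create_engine, Column, Integer
from sqlalchemy.orm import declarative_base, scoped_session, sessionmaker

Base = declarative_base()

class User(Base):              # belongs in the "main" database
    __tablename__ = "users"
    id = Column(Integer, primary_key=True)

class Stat(Base):              # belongs in the "stats" database
    __tablename__ = "stats"
    id = Column(Integer, primary_key=True)

main_engine = create_engine("sqlite://")
stats_engine = create_engine("sqlite://")

# Map each class *and* its underlying Table to its engine.  Including the
# Tables matters: secondary (M2M) tables are plain Tables, and anything
# missing from the mapping silently falls through to the default bind.
binds = {}
for cls, engine in ((User, main_engine), (Stat, stats_engine)):
    binds[cls] = engine
    binds[cls.__table__] = engine

DBSession = scoped_session(sessionmaker())
DBSession.configure(binds=binds)
```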
I spent quite a bit of time trying to get a many-to-many relationship working across the barrier. It would silently "succeed" when I added objects to the relationship, but the objects would be gone when accessed a few seconds later (the same test passed on both the single-database version and the properties-and-queries multi-database version).
I'm guessing part of the problem was that the binds weren't consulted in the metadata.create_all() call, so the full schema was being created on every connection (since fixed, though I haven't re-tested the M2M because I'd already ripped it out). It took me quite a while to realize that the secondary table in the M2M relation was actually being populated in the "main" DB instead of the "stats" DB (I'd overlooked Tables and only mapped Declarative sub-classes in my model init).
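Avoiding that means schema creation has to respect the binds too. A sketch of the fix, again with hypothetical classes and in-memory engines: MetaData.create_all() accepts a tables= subset, so each engine only gets the tables it owns.

```python
from collections import defaultdict

from sqlalchemy import create_engine, Column, Integer
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class User(Base):
    __tablename__ = "users"
    id = Column(Integer, primary_key=True)

class Stat(Base):
    __tablename__ = "stats"
    id = Column(Integer, primary_key=True)

main_engine = create_engine("sqlite://")
stats_engine = create_engine("sqlite://")
binds = {
    User: main_engine, User.__table__: main_engine,
    Stat: stats_engine, Stat.__table__: stats_engine,
}

# Group Tables by engine and create each table only where it belongs,
# instead of running metadata.create_all(engine) on every connection.
tables_per_engine = defaultdict(list)
for obj, engine in binds.items():
    table = getattr(obj, "__table__", obj)   # class -> its Table
    if table not in tables_per_engine[engine]:
        tables_per_engine[engine].append(table)

for engine, tables in tables_per_engine.items():
    Base.metadata.create_all(bind=engine, tables=tables)
```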
So, as of now, I've got a single session, all the tables are mapped, and everything seems to think it is running fine. Thing is, none of the other cross-DB relations (even the one-to-many) seem to actually be doing anything either: they raise no errors, but the collections they should be retrieving all come back empty (this is only the case for cross-DB relations). I'm guessing this will turn out to be a similar problem to the M2M one, but I should have been in bed hours ago, so I won't figure it out tonight.
It is dangerous to play with magic when you have other things to get done. I think I'll abandon this experiment (for this project) and go back to my stupid-simple properties-and-queries solution for the cross-DB connections. Playing with magic can wait for some day when I can afford to spelunk through the tomes of lore at leisure.
mike bayer on 10/26/2011 9:39 a.m. #
There's no "magic" here. What SQLA offers is at the session level - you apply multiple engines to the Session: http://www.sqlalchemy.org/docs/orm/session.html#vertical-partitioning . It then queries one or the other database based on the class it is instructed to load - that's it! It seems like what you're looking for is a lot *more* magic - because here, there is none.
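The vertical-partitioning recipe linked above boils down to something like this (the class names and in-memory engines here are illustrative stand-ins, not taken from the docs): one Session, with each class routed to its own engine via binds.

```python
from sqlalchemy import create_engine, Column, Integer
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class User(Base):
    __tablename__ = "users"
    id = Column(Integer, primary_key=True)

class AccessLog(Base):
    __tablename__ = "access_log"
    id = Column(Integer, primary_key=True)

engine1 = create_engine("sqlite://")   # the "main" database
engine2 = create_engine("sqlite://")   # the "stats" database
User.__table__.create(engine1)
AccessLog.__table__.create(engine2)

# A single Session over two engines: inserts and queries for each class
# go to the database that class is bound to.
Session = sessionmaker(binds={User: engine1, AccessLog: engine2})
session = Session()
session.add_all([User(), AccessLog()])
session.commit()
```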
Mike Fletcher on 10/26/2011 4:41 p.m. #
To be clear:
class X( DeclarativeBase ):
class Y( DeclarativeBase ):
    x = relation( x_id, backref='yes' )
x = X()
If that doesn't work, what's the point? It *almost* seems to work, but the records seem to disappear. I expect that it is something messed up with the DBSession transaction management (TurboGears uses the Zope transaction module and won't accept the two-phase-commit flag), or something similarly low level.
If this kind of operation (i.e. basic ORM operation) really isn't supposed to work, well, that's not all that useful for me. Magic here refers to something complex that is supposed to "just work"; any sufficiently advanced library is indistinguishable from magic.
mike bayer on 10/27/2011 9:40 a.m. #
The basic idea of the above code works perfectly in 0.5 and 0.7. Here is a fully written test including an assertion: http://pastebin.com/HeN6Gsqq
As for why your specific case silently fails, I can't say, as I don't have your actual code to test (the code above would not even compile correctly or be accepted by SQLAlchemy). Using echo=True with an engine is a key strategy, however. It will show you exactly what's going on (or not). When using 0.6 or 0.7 you can send "logging_name='somename'" to create_engine() to distinguish which engine is being used for a particular statement.
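The two suggestions look something like this in practice (the engine URLs are in-memory stand-ins): each statement is logged under a per-engine logger name, so a query hitting the wrong database is immediately visible.

```python
import logging

from sqlalchemy import create_engine, text

logging.basicConfig()  # make the engines' INFO-level SQL logging visible

main = create_engine("sqlite://", echo=True, logging_name="main")
stats = create_engine("sqlite://", echo=True, logging_name="stats")

# Statements now log under "sqlalchemy.engine.Engine.main" vs. ".stats",
# so the two databases' traffic can be told apart in the output.
with main.connect() as conn:
    result = conn.execute(text("SELECT 1")).scalar()
```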
As I've alluded to previously, relying upon TG and Zope extensions is a quick way to create mysterious, buried issues, as you're relying on someone else's configuration/implementation/quirks/documentation to configure things correctly. Using SQLAlchemy directly IMHO should always be preferred.
One of the key facets of the SQLAlchemy project is that if you write to our mailing list, you will in 99% of cases get an answer to your question within a few hours, usually from me personally. In the future please send an email when you're having configuration difficulties!
Mike C. Fletcher on 10/27/2011 11:54 a.m. #
Point taken regarding asking for help on the mailing list.
This *is* a TurboGears project, so ripping out its (built-in) reliance on Zope transaction and TurboGears isn't likely to fly (in point of fact, I've already reverted to the TurboGears-documented approach and completed all the changes the user wanted to make).
If TurboGears' usage/configuration of SQLAlchemy is deprecated/unsupportable then that probably needs to be addressed at the TurboGears level. As this is my last running TG project and it is being retired in approximately 6 months, I don't have much motivation to work that through.
mike bayer on 10/29/2011 2:47 p.m. #
re: motivation to solve TG's issues - me neither! That's why I put out these general waves of "use SQLAlchemy directly if the TG stuff is once again out of date/broken/buggy/nobody knows how it works". When you do have to use it, getting it working in a more rudimentary form, as I did with my test case, is a great way to figure things out: you only need to find what differs between the working and non-working configurations to target the issue.