If you want to remove mappings, you can call clear_mappers(). The use case for removing individual mappers is not supported, as there is no support for doing all the reverse bookkeeping of removing relationship()s, backrefs, and inheritance structures, and there's really no need for such a feature. Like MetaData.remove(), there's almost no real-world use case for clear_mappers() except that of the SQLAlchemy unit tests themselves, or tests of other ORM-integration layers like Elixir, which are testing the ORM itself with various kinds of mappings against the same set of classes.

Unit tests in an outside-world application would normally be against a schema that's an integral part of the application and doesn't change with regard to classes. There's virtually no reason in normal applications against a fixed schema to tear down mappings and table metadata between tests. The SQLAlchemy docs stress the Declarative pattern very much these days, as we're really trying to get it across that the composition of class, table metadata, and mapping is best regarded as an atomic structure: it exists only as that composite, or not at all. Breaking it apart has little use unless you're testing the mechanics of the mapping itself.

Throughout all of this, we are *not* talking about the tables and schema that are in the actual database. It is typical that unit tests do drop all those tables in between test suites, and recreate them for another test suite. Though I tend to favor not actually dropping/recreating, and instead running the tests within a transaction that's rolled back at the end, as it's much more efficient, especially on backends like Oracle, PostgreSQL, and MSSQL where creates/drops are more expensive.
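The run-each-test-in-a-rolled-back-transaction pattern described above can be sketched roughly as follows. This is a minimal illustration against an in-memory SQLite database; the `User` model and test names are invented for the example, not taken from the thread:

```python
import unittest
from sqlalchemy import create_engine, Column, Integer, String
from sqlalchemy.orm import declarative_base, Session

Base = declarative_base()

class User(Base):  # hypothetical model, for illustration only
    __tablename__ = "users"
    id = Column(Integer, primary_key=True)
    name = Column(String(50))

# schema is created once up front, not dropped/recreated per test
engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

class RollbackTest(unittest.TestCase):
    def setUp(self):
        # open a connection and begin an external transaction;
        # the Session is bound to that connection and runs inside it
        self.conn = engine.connect()
        self.trans = self.conn.begin()
        self.session = Session(bind=self.conn)

    def tearDown(self):
        # roll the whole transaction back: every row the test wrote
        # disappears, with no DROP/CREATE of tables
        self.session.close()
        self.trans.rollback()
        self.conn.close()

    def test_insert(self):
        self.session.add(User(name="ed"))
        self.session.flush()
        self.assertEqual(self.session.query(User).count(), 1)
```

Note the test uses flush() rather than commit(), so nothing escapes the enclosing transaction; how commit() interacts with an external transaction varies by SQLAlchemy version.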
> I was wrong, the method emptied the database, but I was checking the
> ... This time I am also removing the tables from the metadata, but if I
> generate the same tables in two separate test methods (with a call to
> tearDown and setUp in between), I still get an error about a backref
> name on a relationship already existing. In separate runs the error
> does not occur, and from that I conclude that somehow the metadata
> retains the class definitions.

OK, I think you're mixing concepts up here: a backref is an ORM concept. The Table and MetaData objects are part of Core and know absolutely nothing about the ORM or mappings. Removing a Table from a particular MetaData has almost no effect, as all the ORM mappings still point to it. In reality the MetaData.remove() method is mostly useless, except that a create_all() will no longer hit that Table, foreign key references will no longer find it, and you can replace it with a new Table object of the same name; but again this has nothing to do with the ORM, and nothing to do with the state of that removed Table, which still points to that MetaData and will otherwise function normally.

> tools.drop_tables(metadata, engine, excludes)
>
> with tools.drop_tables removing all tables from the database (using
> the DropEverything function) and all tables except from the excludes
> from the metadata (with metadata.remove(table)). Some tables (with
> class definitions) are defined in modules, while others are generated
> on the fly (from the class definitions). The first I just want to
> remove the records (or regenerate the tables from the existing
> metadata), while the others I want to remove. I have also considered
> creating new metadata in setUp, but I would not know how to
> reinitialize it with the classes/tables hard coded in the modules. Is
> there a way to achieve this or am I missing something?
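The limited effect of MetaData.remove() described above can be seen directly. A small sketch, with a made-up table name:

```python
from sqlalchemy import MetaData, Table, Column, Integer, create_engine

metadata = MetaData()
old = Table("widget", metadata, Column("id", Integer, primary_key=True))

# removing the Table only takes it out of the MetaData collection;
# the Table object itself is unchanged and still points at that MetaData
metadata.remove(old)
assert "widget" not in metadata.tables
assert old.metadata is metadata

# create_all() no longer emits DDL for the removed Table
engine = create_engine("sqlite://")
metadata.create_all(engine)

# and a new Table of the same name can now be registered
new = Table("widget", metadata, Column("id", Integer, primary_key=True))
assert metadata.tables["widget"] is new
```

Nothing here touches the ORM: any mapper already configured against `old` would keep using it regardless of the remove() call.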
I was wrong, the method emptied the database, but I was checking the ... This time I am also removing the tables from the metadata, but if I generate the same tables in two separate test methods (with a call to tearDown and setUp in between), I still get an error about a backref name on a relationship already existing. In separate runs the error does not occur, and from that I conclude that somehow the metadata retains the class definitions.
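As the answer above notes, there is no per-mapper removal; clear_mappers() is the wholesale teardown used by SQLAlchemy's own test suite. A minimal sketch of the failure mode and the fix, using imperative mapping with invented class and table names:

```python
from sqlalchemy import MetaData, Table, Column, Integer
from sqlalchemy.orm import registry, clear_mappers

metadata = MetaData()
user_table = Table("user", metadata, Column("id", Integer, primary_key=True))

class User:  # plain class, mapped imperatively below
    pass

registry().map_imperatively(User, user_table)

# mapping the same class a second time fails: this is the kind of
# "already exists" error a repeated setUp runs into
try:
    registry().map_imperatively(User, user_table)
    remap_failed = False
except Exception:
    remap_failed = True

# clear_mappers() removes *all* mappings; afterwards the same class can
# be mapped again, while the Table and MetaData are left untouched
clear_mappers()
registry().map_imperatively(User, user_table)
```

The Core objects survive the teardown: `user_table` is still present in `metadata.tables`, which is exactly the Core/ORM split described in the reply above.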