Database testing and deployment

From CitconWiki

Scenario: different clients run different versions of the application deployed on-site, and you have to be able to upgrade from any version to the most recent one.

  • store the full DDL of the tables in Subversion
  • create a script-based solution that diffs two consecutive revisions of the DDL and generates the alteration script from them (each database knows which version it is currently at). They had not implemented the mechanism in reverse, because they had no need for it, but it should be straightforward
  • for backout, they kept the original database file around, so rollback simply meant re-mounting the original one (they used Oracle)
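The versioned-upgrade idea above can be sketched roughly as follows, assuming an ordered set of generated alteration scripts and a table that records each database's current schema version (all names here are illustrative, not from the session):

```python
import sqlite3

# Hypothetical ordered migrations: each entry upgrades the schema from
# version N to N+1 (in the approach described, these would be generated
# by diffing two consecutive DDL revisions in version control).
MIGRATIONS = {
    1: "ALTER TABLE customer ADD COLUMN email TEXT",
    2: "CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER)",
}

def current_version(conn):
    # Each database knows which schema version it is currently at.
    return conn.execute("SELECT version FROM schema_version").fetchone()[0]

def upgrade(conn):
    # Apply every migration from the stored version up to the latest.
    v = current_version(conn)
    while v in MIGRATIONS:
        conn.execute(MIGRATIONS[v])
        v += 1
        conn.execute("UPDATE schema_version SET version = ?", (v,))
    conn.commit()
    return v

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE schema_version (version INTEGER)")
conn.execute("INSERT INTO schema_version VALUES (1)")
conn.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY)")
upgrade(conn)  # a version-1 database is walked forward to the latest
```

Because the loop starts from whatever version the target database reports, the same script upgrades any client, which is the property the scenario calls for.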

Single-prod deployment scenario: keep the full DDL in version control as above, but in addition maintain a pair of SQL scripts (release/rollback) that can convert the database between the prod version and the new one (including data migrations). There was concern about the double bookkeeping (e.g. a changed stored procedure was stored twice in this strategy), which could certainly be improved. One idea was to keep just the names of the changed stored procedures in a shell script that picks up each procedure's DDL from the proper source folder, while keeping the other alter statements (add/drop column, for instance) within the turnover scripts.
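A minimal sketch of such a release/rollback pair, including a data migration, might look like this (the schema and statements are illustrative; a real rollback would use the target database's ALTER syntax rather than the table rebuild SQLite needs here):

```python
import sqlite3

# Hypothetical release script: schema change plus data migration.
RELEASE = [
    "ALTER TABLE account ADD COLUMN full_name TEXT",
    "UPDATE account SET full_name = first || ' ' || last",  # data migration
]

# Matching rollback script: convert the database back to the prod shape.
# Older SQLite versions lack DROP COLUMN, so the table is rebuilt.
ROLLBACK = [
    "CREATE TABLE account_old AS SELECT id, first, last FROM account",
    "DROP TABLE account",
    "ALTER TABLE account_old RENAME TO account",
]

def run(conn, script):
    for stmt in script:
        conn.execute(stmt)
    conn.commit()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE account (id INTEGER, first TEXT, last TEXT)")
conn.execute("INSERT INTO account VALUES (1, 'Ada', 'Lovelace')")
run(conn, RELEASE)   # prod -> new version
run(conn, ROLLBACK)  # new version -> prod again
```

The double bookkeeping the session worried about is visible here: any stored-procedure change would have to appear in both the full DDL and the release script.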

There was a question about how to deal with no-downtime changes, followed by a detour into using replication to propagate the schema/data changes across a farm, so that the deployment only needs to be applied to the master node.

To discover API incompatibilities quickly, one approach is database-metadata-driven code generation (essentially, for each procedure generate a Java/C# method and a corresponding arguments class, and use those objects from inside the code to access the database), which would surface such discrepancies at compile time on the CI server.
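A rough sketch of that generation step, assuming procedure metadata has already been read from the database's catalog views (the procedure names and parameters below are invented for illustration):

```python
# Hypothetical metadata: (procedure name, [(parameter name, type), ...]).
# In a real system this would come from the database's catalog views.
PROC_METADATA = [
    ("get_customer", [("customer_id", "int")]),
    ("create_order", [("customer_id", "int"), ("amount", "float")]),
]

def generate_wrapper(name, params):
    # Emit one typed wrapper per procedure; if the procedure's signature
    # changes in the database, regenerated code no longer matches its
    # call sites and the build breaks on the CI server.
    args = ", ".join(f"{p}: {t}" for p, t in params)
    placeholders = ", ".join("?" for _ in params)
    values = ", ".join(p for p, _ in params)
    return (
        f"def {name}(conn, {args}):\n"
        f"    return conn.execute('CALL {name}({placeholders})', ({values},))\n"
    )

code = "\n".join(generate_wrapper(n, ps) for n, ps in PROC_METADATA)
```

The session's suggestion was Java/C# classes rather than functions, but the principle is the same: the generated layer turns a runtime signature mismatch into a compile-time failure.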

If possible, try to have a database per developer. It is also useful to have a smaller dataset that you can use for functional testing.
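A minimal sketch of both ideas together, assuming a small fixture dataset (the schema and data are invented): each developer gets a fresh, isolated database seeded with the fixture.

```python
import sqlite3

# A small, fast fixture dataset for functional tests (illustrative).
FIXTURE = [
    "CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT)",
    "INSERT INTO customer VALUES (1, 'Ada'), (2, 'Grace')",
]

def fresh_dev_db():
    # In-memory here for speed; a per-developer file or schema
    # (e.g. dev_<user>.db) would give the same isolation between
    # developers on a shared server.
    conn = sqlite3.connect(":memory:")
    for stmt in FIXTURE:
        conn.execute(stmt)
    conn.commit()
    return conn

db = fresh_dev_db()  # each test run starts from the same known state
```

The payoff is that no developer's experiments can break another developer's tests, and the small fixture keeps each run quick.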

There was a question about intelligent characteristic-test data generation to provide coverage in legacy systems, similar to the way Agitar's product works in the Java space (i.e. finding all the decision points in the stored procedure and generating data to exercise all paths through it), but no one was aware of any such tool (though Visual Studio 2010 was mentioned as something that might have similar features).

There was an emphasis on functional testing, and on how DbFit can be used to make testing easy for data-warehouse developers (no hand-written asserts), using Excel-like tables in FIT to describe the input data and the expected results for that data. This raised the question of tooling and the fragility of such tests (e.g. if you split a table into two, do all the tests break?), which was followed by a discussion of how general acceptance-testing best practices apply to database acceptance testing as well.
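The flavour of such a test can be sketched outside DbFit too: the input data and the expected result set are both written down as tables, and the framework does the comparing. Table names and the query are invented for illustration.

```python
import sqlite3

# Input data "table", as a DbFit fixture would describe it.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sale (region TEXT, amount INTEGER)")
conn.executemany("INSERT INTO sale VALUES (?, ?)",
                 [("EU", 10), ("EU", 5), ("US", 7)])

# Expected-results "table" for the query under test.
EXPECTED = [("EU", 15), ("US", 7)]

actual = conn.execute(
    "SELECT region, SUM(amount) FROM sale GROUP BY region ORDER BY region"
).fetchall()
```

The developer only writes the two tables; the row-by-row comparison is the framework's job, which is what makes the style approachable for data-warehouse work.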

There was a brief discussion about data-driven testing (e.g. frameworks where form elements (labels, validators, etc.) are defined in the database); the suggestion was to use a faster database for these tests (e.g. in-memory, SQLite, etc.) to speed them up, and to test the framework itself separately.
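A minimal sketch of that setup, with an invented schema: the form-element definitions live in a table, and the tests run against a fast in-memory SQLite copy rather than the real database.

```python
import sqlite3

# Form elements defined as data, per the data-driven approach discussed;
# the schema here is illustrative.
conn = sqlite3.connect(":memory:")  # in-memory database keeps tests fast
conn.execute(
    "CREATE TABLE form_element (name TEXT, label TEXT, required INTEGER)")
conn.executemany(
    "INSERT INTO form_element VALUES (?, ?, ?)",
    [("email", "E-mail", 1), ("nickname", "Nickname", 0)],
)

def required_fields(conn):
    # The application logic under test reads its configuration from data.
    return [row[0] for row in conn.execute(
        "SELECT name FROM form_element WHERE required = 1")]
```

Because the definitions are plain rows, hundreds of such tests can load their configuration in milliseconds, while the framework that interprets the rows is tested separately, as suggested.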

It was noted that when making changes, DDL changes should come first, followed by DML ones, to avoid problems.
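The ordering rule can be seen in a tiny example (statements invented): the DML below depends on a column the DDL introduces, so reversing the order would fail.

```python
import sqlite3

# DDL (schema changes) first, then DML (data changes), as noted above.
DDL = ["ALTER TABLE person ADD COLUMN status TEXT"]
DML = ["UPDATE person SET status = 'active'"]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE person (id INTEGER)")
conn.execute("INSERT INTO person VALUES (1)")
for stmt in DDL + DML:   # running DML first would fail: the status
    conn.execute(stmt)   # column would not exist yet
conn.commit()
```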

Some tools mentioned were SQLUnit, DbFit, Liquibase, and dbdeploy.