cayenne-dev mailing list archives

From John Huss <>
Subject Re: Handling database creation and migration
Date Wed, 21 Dec 2011 20:22:33 GMT
On Wed, Dec 21, 2011 at 2:02 PM, Andrus Adamchik <> wrote:

> I am checking my migrations archive, and "Cayenne-friendly" schema
> alterations or simple INSERT/UPDATE/DELETE make up probably 50% of those.
> The rest are things like these:
> (1)
> LOAD DATA LOCAL INFILE './patches-5.0/some.csv' INTO TABLE some_table
> LINES TERMINATED BY '\n' (C1,C2,C3);
> (2)
> SET unique_checks=0;
> ALTER TABLE t1 DROP INDEX keyword_index;
> ALTER TABLE t1 DROP INDEX headline_index;
> SET unique_checks=1;
> (3)
> set @AAA = (select max(ID) from;
> insert into (ID, NAME, DESCRIPTION, URL)
> values((@AAA+1), 's1', 's2', 's3');
> So how would those be handled if we are to go this route?

There is a built-in Migration subclass called SqlFileMigration that makes
it easy to run a raw SQL script.  It uses filename conventions to locate
SQL scripts: for version zero of the DataMap named 'MyMap' running on
Postgresql, it would look for a file named 'MyMap0-Postgres.sql' and use
that, or, if that is not found, fall back to a generic one named
'MyMap0.sql'.
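That lookup order can be sketched as a small plain-Java helper (the
`candidateScripts` name is hypothetical, not part of Cayenne; only the
filename convention itself comes from the description above):

```java
import java.util.Arrays;
import java.util.List;

public class ScriptLookup {
    // Returns the filenames SqlFileMigration would try, most specific
    // first: a database-specific script, then a generic fallback.
    public static List<String> candidateScripts(String dataMap, int version, String dbFlavor) {
        return Arrays.asList(
            dataMap + version + "-" + dbFlavor + ".sql",  // e.g. MyMap0-Postgres.sql
            dataMap + version + ".sql");                  // e.g. MyMap0.sql
    }
}
```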

If you need to run commands from more than one file, or you need more
complicated logic for choosing a script, you can call
executeSqlScript(filename) directly.  Or, if you only need to run a single
command, you can call executeSqlStatement(sql).

So you have the ability to mix raw SQL script execution with the API,
which I often do: I'll subclass SqlFileMigration and use the API to add
new tables etc., and then put things like index creation in a
corresponding SQL file that runs after the API calls, when upgrade() calls
super.upgrade().
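That mix-and-match pattern can be sketched with stand-in classes.  These
are NOT the real Cayenne migration classes; only the method names
(executeSqlScript, executeSqlStatement) and the super.upgrade call order
come from the description above:

```java
import java.util.ArrayList;
import java.util.List;

// Stand-in for the migration base class: records what it runs so the
// call order is visible.
abstract class Migration {
    final List<String> log = new ArrayList<>();
    abstract void upgrade();
    void executeSqlScript(String filename) { log.add("script:" + filename); }
    void executeSqlStatement(String sql)   { log.add("sql:" + sql); }
}

// Stand-in for SqlFileMigration: the real class locates the script by
// the filename convention; here it is hardcoded for illustration.
class SqlFileMigration extends Migration {
    @Override
    void upgrade() {
        executeSqlScript("MyMap3.sql");
    }
}

// A concrete migration mixing API calls with the SQL file.
class MyMap3 extends SqlFileMigration {
    @Override
    void upgrade() {
        // Structural changes via the API first...
        executeSqlStatement("CREATE TABLE article (id INT PRIMARY KEY)");
        // ...then super.upgrade() runs the corresponding SQL file
        // (index creation etc.) after the API calls.
        super.upgrade();
    }
}
```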
