From: skolbachev@apache.org
To: commits@cayenne.apache.org
Date: Tue, 06 Dec 2016 13:16:31 -0000
Message-Id: <89e07245295e4b679285afd6bec50676@git.apache.org>
In-Reply-To: <5d47c5e8844e48228359c608b6d92c55@git.apache.org>
References: <5d47c5e8844e48228359c608b6d92c55@git.apache.org>
Subject: [4/5] cayenne git commit: Cayenne documentation update - added Modeler reverse engineering tool description - docbook maven plugin updated - all docs updated to proper syntax highlight - css style added for console input/output elements ( tag

Cayenne documentation update - added Modeler reverse engineering tool description - docbook maven plugin updated - all docs updated to proper syntax highlight - css style added for console input/output elements ( tag) - added links - updated cdbimport parameters - added some info about modeler - auto insert of the year and cayenne version

Project: http://git-wip-us.apache.org/repos/asf/cayenne/repo
Commit: http://git-wip-us.apache.org/repos/asf/cayenne/commit/34be65a0
Tree: http://git-wip-us.apache.org/repos/asf/cayenne/tree/34be65a0
Diff: http://git-wip-us.apache.org/repos/asf/cayenne/diff/34be65a0

Branch: refs/heads/master
Commit: 34be65a0175ddd4919d681b19a52c3d8490f4ec8
Parents: 320495e
Author: Nikita Timofeev
Authored: Mon Dec 5 17:52:20 2016 +0300
Committer: Nikita Timofeev
Committed: Mon Dec 5 17:52:20 2016 +0300

----------------------------------------------------------------------
 .../src/docbkx/cayenne-mapping-structure.xml    |    8 +-
 .../src/docbkx/cayennemodeler-application.xml   |   56 +++
 .../src/docbkx/customizing-cayenne-runtime.xml  |    4 +-
 .../cayenne-guide/src/docbkx/expressions.xml    |   32 +-
 .../src/docbkx/including-cayenne-in-project.xml |  134 +++---
 docs/docbook/cayenne-guide/src/docbkx/index.xml |    5 +-
 docs/docbook/cayenne-guide/src/docbkx/part4.xml |    1 +
 .../src/docbkx/performance-tuning.xml           |   24 +-
 .../docbkx/persistent-objects-objectcontext.xml |    4 +-
 .../cayenne-guide/src/docbkx/queries.xml        |   40 +-
 .../cayenne-guide/src/docbkx/re-filtering.xml   |  457 ++++++++++---------
 .../src/docbkx/re-introduction.xml              |  116 +++--
 .../cayenne-guide/src/docbkx/re-modeler.xml     |  113 +++++
 .../src/docbkx/re-name-generator.xml            |   18 +-
 .../docbkx/re-relationships-loading-control.xml |   20 +-
 .../cayenne-guide/src/docbkx/re-table-types.xml |   22 +-
 .../src/docbkx/re-types-mapping.xml             |  104 +++--
 .../cayenne-guide/src/docbkx/rop-deployment.xml |    2 +-
 docs/docbook/cayenne-guide/src/docbkx/setup.xml |   64 ++-
 .../src/docbkx/starting-cayenne.xml             |    2 +-
 .../src/images/re-modeler-datasource-select.png |  Bin 0 -> 32658 bytes
 .../re-modeler-reverseengineering-dialog.png    |  Bin 0 -> 29936 bytes
 .../src/main/resources/css/cayenne-doc.css      |    8 +
 .../src/docbkx/index.xml                        |    2 +-
 .../src/docbkx/reverse-engineering-ch1.xml      |  378 ++++++++-------
 .../src/docbkx/reverse-engineering-ch2.xml      |  286 ++++++------
 .../getting-started-rop/src/docbkx/index.xml    |    2 +-
 .../getting-started/src/docbkx/index.xml        |    2 +-
 docs/docbook/pom.xml                            |    7 +-
 docs/docbook/upgrade-guide/src/docbkx/index.xml |    2 +-
 30 files changed, 1095 insertions(+), 818 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/cayenne/blob/34be65a0/docs/docbook/cayenne-guide/src/docbkx/cayenne-mapping-structure.xml
----------------------------------------------------------------------
diff --git a/docs/docbook/cayenne-guide/src/docbkx/cayenne-mapping-structure.xml b/docs/docbook/cayenne-guide/src/docbkx/cayenne-mapping-structure.xml
index 6f3fb6c..6d76ef2 100644
--- a/docs/docbook/cayenne-guide/src/docbkx/cayenne-mapping-structure.xml
+++ b/docs/docbook/cayenne-guide/src/docbkx/cayenne-mapping-structure.xml
@@ -29,13 +29,13 @@ legacy reasons this naming convention is different from the convention for the
 root project descriptor above, and we may align it in the future versions. Here is how a
 typical project might look on the file
- system:~: ls -l
+ system:$ ls -l
total 24
-rw-r--r-- 1 cayenne staff 491 Jan 28 18:25 cayenne-project.xml
--rw-r--r-- 1 cayenne staff 313 Jan 28 18:25 datamap.map.xml
+-rw-r--r-- 1 cayenne staff 313 Jan 28 18:25 datamap.map.xml
 DataMaps are referenced by name in the root
- descriptor:<map name="datamap"/>
- Map files are resolved by Cayenne by appending .map.xml" extension to the
+ descriptor:<map name="datamap"/>
+ Map files are resolved by Cayenne by appending ".map.xml" extension to the
 map name, and resolving the resulting string relative to the root descriptor URI. The
 following sections discuss various ORM model objects, without regard to their XML
 representation. XML format details are really unimportant to Cayenne users.

http://git-wip-us.apache.org/repos/asf/cayenne/blob/34be65a0/docs/docbook/cayenne-guide/src/docbkx/cayennemodeler-application.xml
----------------------------------------------------------------------
diff --git a/docs/docbook/cayenne-guide/src/docbkx/cayennemodeler-application.xml b/docs/docbook/cayenne-guide/src/docbkx/cayennemodeler-application.xml
index 4e41174..4c6b9ae 100644
--- a/docs/docbook/cayenne-guide/src/docbkx/cayennemodeler-application.xml
+++ b/docs/docbook/cayenne-guide/src/docbkx/cayennemodeler-application.xml
@@ -19,21 +19,75 @@
 CayenneModeler Application
Working with Mapping Projects +
Reverse Engineering Database + + See chapter Reverse Engineering in Cayenne Modeler +
Generating Database Schema
+
+ With Cayenne Modeler you can create simple database schemas without any additional database tools.
+ This is a good option for initial database setup if you created your model entirely with the Modeler.
+ You can start SQL schema generation by selecting the menu
+
+ Tools > Generate Database Schema
+
+
+
+ You can select which database parts should be generated and which tables you want
Migrations +
Generating Java Classes
+
+ Before using Cayenne in your code you need to generate Java source code for persistent objects.
+ This can be done with the Modeler GUI or via the cgen Maven/Ant plugin.
+
+
+ To generate classes in the Modeler use
+
+ Tools > Generate Classes
+
+
+
+ There are three default types of code generation:
+
+
+ Standard Persistent Objects
+
+ The default type of generation, suitable for almost all cases.
+ Use this type unless you know exactly what you need to customize.
+
+
+
+ Client Persistent Objects
+
+
+
+
+
+ Advanced.
+
+ In advanced mode you can control almost all aspects of code generation, including custom templates for Java code.
+ See the default Cayenne templates on
+ GitHub
+ as an example.
+
+
+
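To make the "Standard Persistent Objects" option above more concrete, here is a minimal sketch of the kind of class pair cgen produces; the Artist entity and its "name" attribute are illustrative assumptions, not something taken from this commit:

    // _Artist.java - the superclass, regenerated on every cgen run; do not edit it
    public abstract class _Artist extends org.apache.cayenne.CayenneDataObject {

        public static final String NAME_PROPERTY = "name";

        public void setName(String name) {
            writeProperty(NAME_PROPERTY, name);
        }

        public String getName() {
            return (String) readProperty(NAME_PROPERTY);
        }
    }

    // Artist.java - generated only once; custom business logic can safely be added here
    public class Artist extends _Artist {
    }

The split into a regenerated superclass and a hand-editable subclass is what lets you rerun cgen after mapping changes without losing custom code.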
Modeling Inheritance +
Modeling Generic Persistent Classes @@ -53,8 +107,10 @@
Mapping ObjAttributes to Custom Classes +
Modeling Primary Key Generation Strategy +
http://git-wip-us.apache.org/repos/asf/cayenne/blob/34be65a0/docs/docbook/cayenne-guide/src/docbkx/customizing-cayenne-runtime.xml ---------------------------------------------------------------------- diff --git a/docs/docbook/cayenne-guide/src/docbkx/customizing-cayenne-runtime.xml b/docs/docbook/cayenne-guide/src/docbkx/customizing-cayenne-runtime.xml index 097d7d9..21719ce 100644 --- a/docs/docbook/cayenne-guide/src/docbkx/customizing-cayenne-runtime.xml +++ b/docs/docbook/cayenne-guide/src/docbkx/customizing-cayenne-runtime.xml @@ -184,7 +184,7 @@ binder.bind(Key.get(Service2.class, "i2")).to(Service2Impl.class);public class MyExtensionsModule implements Module { + section are assumed to be placed in an application module "configure" method:public class MyExtensionsModule implements Module { public void configure(Binder binder) { // customizations go here... } @@ -199,7 +199,7 @@ ServerRuntime runtime = Supported property names are listed in "Appendix A". There are two ways to set service properties. The most obvious one is to pass it to the JVM with -D flag on startup. - E.g.java -Dcayenne.server.contexts_sync_strategy=false ... + E.g.$ java -Dcayenne.server.contexts_sync_strategy=false ... A second one is to contribute a property to org.apache.cayenne.configuration.DefaultRuntimeProperties.properties map (see the next section on how to do that). This map contains the default http://git-wip-us.apache.org/repos/asf/cayenne/blob/34be65a0/docs/docbook/cayenne-guide/src/docbkx/expressions.xml ---------------------------------------------------------------------- diff --git a/docs/docbook/cayenne-guide/src/docbkx/expressions.xml b/docs/docbook/cayenne-guide/src/docbkx/expressions.xml index 7e34f71..01deb9c 100644 --- a/docs/docbook/cayenne-guide/src/docbkx/expressions.xml +++ b/docs/docbook/cayenne-guide/src/docbkx/expressions.xml @@ -109,7 +109,7 @@ understanding the semantics. A Cayenne expression can be represented as a String, which can be later converted to an expression object using Expression.fromString static method. Here is an - example:String expString = "name like 'A%' and price < 1000"; + example:String expString = "name like 'A%' and price < 1000"; Expression exp = Expression.fromString(expString);This particular expression may be used to match Paintings with names that start with "A" and a price less than $1000. While this example is pretty self-explanatory, there are a few @@ -120,13 +120,13 @@ Expression exp = Expression.fromString(expString);This may be other entities, for which this expression is valid. Now the expression details... Character constants that are not paths or numeric values should be enclosed in single or double quotes. Two of the expressions below are - equivalent:name = 'ABC' + equivalent:name = 'ABC' // double quotes are escaped inside Java Strings of course name = \"ABC\" Case sensitivity. Expression operators are all case sensitive and are usually lowercase. Complex words follow the java camel-case - style:// valid + style:// valid name likeIgnoreCase 'A%' // invalid - will throw a parse exception @@ -138,7 +138,7 @@ name LIKEIGNORECASE 'A%' optionally prefixed by "obj:" (usually they are not prefixed at all actually). Database expressions are always prefixed with "db:". 
A special kind of prefix, not discussed yet is "enum:" that prefixes an enumeration - constant:// object path + constant:// object path name = 'Salvador Dali' // same object path - a rarely used form @@ -156,13 +156,13 @@ name = enum:org.foo.EnumClass.VALUE1 Binary conditions are expressions that contain a path on the left, a value on the right, and some operation between them, such as equals, like, etc. They can be used as qualifiers in - SelectQueries:name like 'A%' + SelectQueries:name like 'A%' Named parameters. Expressions can have named parameters (names that start with "$"). Parameterized expressions allow to create reusable expression templates. Also if an Expression contains a complex object that doesn't have a simple String representation (e.g. a Date, a DataObject, an ObjectId), parameterizing such expression is the only way to represent it as String. Here are some - examples:Expression template = Expression.fromString("name = $name"); + examples:Expression template = Expression.fromString("name = $name"); ... Map p1 = Collections.singletonMap("name", "Salvador Dali"); Expression qualifier1 = template.expWithParameters(p1); @@ -171,19 +171,19 @@ Map p2 = Collections.singletonMap("name", "Monet"); Expression qualifier2 = template.expWithParameters(p2);To create a named parameterized expression with a LIKE clause, SQL wildcards must be part of the values in the Map and not the expression string - itself:Expression template = Expression.fromString("name like $name"); + itself:Expression template = Expression.fromString("name like $name"); ... Map p1 = Collections.singletonMap("name", "Salvador%"); Expression qualifier1 = template.expWithParameters(p1);When matching on a relationship, parameters can be Persistent objects or - ObjectIds:Expression template = Expression.fromString("artist = $artist"); + ObjectIds:Expression template = Expression.fromString("artist = $artist"); ... Artist dali = // asume we fetched this one already Map p1 = Collections.singletonMap("artist", dali); Expression qualifier1 = template.expWithParameters(p1);Uninitialized parameters will be automatically pruned from expressions, so a user can omit some parameters when creating an expression from a parameterized - template:Expression template = Expression.fromString("name like $name and dateOfBirth > $date"); + template:Expression template = Expression.fromString("name like $name and dateOfBirth > $date"); ... Map p1 = Collections.singletonMap("name", "Salvador%"); Expression qualifier1 = template.expWithParameters(p1); @@ -211,7 +211,7 @@ Expression qualifier1 = template.expWithParameters(p1); general examples and some gotchas. The following code recreates the expression from the previous chapter, but now using expression - API:// String expression: name like 'A%' and price < 1000 + API:// String expression: name like 'A%' and price < 1000 Expression e1 = ExpressionFactory.likeExp(Painting.NAME_PROPERTY, "A%"); Expression e2 = ExpressionFactory.lessExp(Painting.PRICE_PROPERTY, 1000); Expression finalExp = e1.andExp(e2); This @@ -227,7 +227,7 @@ Expression finalExp = e1.andExp(e2); This control how SQL joins are generated if the same path is encountered more than once in the same Expression. Two ExpressionFactory methods allow to implicitly generate aliases to "split" match paths into individual joins if - needed:Expression matchAllExp(String path, Collection values) + needed:Expression matchAllExp(String path, Collection values) Expression matchAllExp(String path, Object... 
values) "Path" argument to both of these methods can use a split character (a pipe symbol '|') instead of dot to indicate that relationship following a path should be split into a @@ -243,16 +243,16 @@ Expression matchAllExp(String path, Object... values) is done by the database engine. However the same expressions can also be used for accessing object properties, calculating values, in-memory filtering. Checking whether an object satisfies an - expression:Expression e = ExpressionFactory.inExp(User.NAME_PROPERTY, "John", "Bob"); + expression:Expression e = ExpressionFactory.inExp(User.NAME_PROPERTY, "John", "Bob"); User user = ... if(e.match(user)) { ... }Reading property - value:Expression e = Expression.fromString(User.NAME_PROPERTY); + value:Expression e = Expression.fromString(User.NAME_PROPERTY); String name = e.evaluate(user); Filtering a list of - objects:Expression e = ExpressionFactory.inExp(User.NAME_PROPERTY, "John", "Bob"); + objects:Expression e = ExpressionFactory.inExp(User.NAME_PROPERTY, "John", "Bob"); List<User> unfiltered = ... List<User> filtered = e.filterObjects(unfiltered); @@ -270,7 +270,7 @@ List<User> filtered = e.filterObjects(unfiltered); In some situations, it is convenient to be able to convert Expression instances into EJBQL. Expressions support this conversion. An example is shown below. - String serial = ... + String serial = ... Expression e = ExpressionFactory.matchExp(Pkg.SERIAL_PROPERTY, serial); List<Object> params = new ArrayList<Object>(); EJBQLQuery query = new EJBQLQuery("SELECT p FROM Pkg p WHERE " + e.toEJBQL(params,"p"); @@ -281,7 +281,7 @@ for(int i=0;i<params.size();i++) { This would be equivalent to the following purely EJBQL querying logic; - EJBQLQuery query = new EJBQLQuery("SELECT p FROM Pkg p WHERE p.serial = ?1"); + EJBQLQuery query = new EJBQLQuery("SELECT p FROM Pkg p WHERE p.serial = ?1"); query.setParameter(1,serial); http://git-wip-us.apache.org/repos/asf/cayenne/blob/34be65a0/docs/docbook/cayenne-guide/src/docbkx/including-cayenne-in-project.xml ---------------------------------------------------------------------- diff --git a/docs/docbook/cayenne-guide/src/docbkx/including-cayenne-in-project.xml b/docs/docbook/cayenne-guide/src/docbkx/including-cayenne-in-project.xml index ae7f81a..307a16c 100644 --- a/docs/docbook/cayenne-guide/src/docbkx/including-cayenne-in-project.xml +++ b/docs/docbook/cayenne-guide/src/docbkx/including-cayenne-in-project.xml @@ -22,19 +22,19 @@ Cayenne distribution contains the following core runtime jars in the distribution lib directory: - cayenne-server-x.x.jar - contains full + cayenne-server-.jar - contains full Cayenne runtime (DI, adapters, DB access classes, etc.). Most applications will use only this file. - cayenne-client-x.x.jar - a subset of cayenne-server.jar trimmed for use on + cayenne-client-.jar - a subset of cayenne-server.jar trimmed for use on the client in an ROP application. Other cayenne-* jars - various Cayenne extensions. - When using cayenne-server-x.x.jar you'll need a few third party jars (all + When using cayenne-server-.jar you'll need a few third party jars (all included in lib/third-party directory of the distribution): Apache Velocity @@ -58,10 +58,10 @@ Maven Projects If you are using Maven, you won't have to deal with figuring out the dependencies. 
You can simply include cayenne-server artifact in your - POM:<dependency> + POM:<dependency> <groupId>org.apache.cayenne</groupId> <artifactId>cayenne-server</artifactId> - <version>X.Y.Z</version> + <version></version> </dependency> Additionally Cayenne provides a Maven plugin with a set of goals to perform various project tasks, such as synching generated Java classes with the mapping, described in the @@ -232,10 +232,10 @@ Example - a typical class generation scenario, where pairs of classes are generated with default Maven source destination and superclass - package:<plugin> + package:<plugin> <groupId>org.apache.cayenne.plugins</groupId> <artifactId>maven-cayenne-plugin</artifactId> - <version>X.Y.Z</version> + <version></version> <configuration> <map>${project.basedir}/src/main/resources/my.map.xml</map> @@ -353,10 +353,10 @@ Example - creating a DB schema on a local HSQLDB - database:<plugin> + database:<plugin> <groupId>org.apache.cayenne.plugins</groupId> <artifactId>maven-cayenne-plugin</artifactId> - <version>X.Y.Z</version> + <version></version> <executions> <execution> <configuration> @@ -378,8 +378,9 @@ cdbimport is a maven-cayenne-plugin goal that generates a DataMap based on an existing database schema. By default, it is bound to the generate-sources phase. This allows you to generate your DataMap prior to building - your project, possibly followed by "cgen" execution to generate the classes. + your project, possibly followed by "cgen" execution to generate the classes. + CDBImport plugin described in details in chapter Reverse Engineering +
@@ -441,13 +442,6 @@ to guess the DB type. - - - - - - - - - - - + - + - + - + @@ -491,41 +480,56 @@ - - - - - - - - - - - - - - - + + + + - + - + @@ -544,10 +548,10 @@ Example - loading a DB schema from a local HSQLDB database (essentially a reverse operation compared to the cdbgen example above) - :<plugin> + :<plugin> <groupId>org.apache.cayenne.plugins</groupId> <artifactId>maven-cayenne-plugin</artifactId> - <version>X.Y.Z</version> + <version></version> <executions> <execution> @@ -578,7 +582,7 @@ cdbimport This is an Ant counterpart of "cdbimport" goal of maven-cayenne-plugin described above. It has exactly the same properties. Here is a usage - example: <cdbimport map="${context.dir}/WEB-INF/my.map.xml" + example: <cdbimport map="${context.dir}/WEB-INF/my.map.xml" driver="com.mysql.jdbc.Driver" url="jdbc:mysql://127.0.0.1/mydb" username="sa" http://git-wip-us.apache.org/repos/asf/cayenne/blob/34be65a0/docs/docbook/cayenne-guide/src/docbkx/index.xml ---------------------------------------------------------------------- diff --git a/docs/docbook/cayenne-guide/src/docbkx/index.xml b/docs/docbook/cayenne-guide/src/docbkx/index.xml index 048bfca..b3dc9f2 100644 --- a/docs/docbook/cayenne-guide/src/docbkx/index.xml +++ b/docs/docbook/cayenne-guide/src/docbkx/index.xml @@ -15,11 +15,12 @@ License. --> + xmlns:xi="http://www.w3.org/2001/XInclude" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" + xsi:schemaLocation="http://docbook.org/xml/5.0/xsd/docbook.xsd" xml:id="cayenne-guide"> Cayenne Guide - 2011-2014 + 2011- Apache Software Foundation and individual authors http://git-wip-us.apache.org/repos/asf/cayenne/blob/34be65a0/docs/docbook/cayenne-guide/src/docbkx/part4.xml ---------------------------------------------------------------------- diff --git a/docs/docbook/cayenne-guide/src/docbkx/part4.xml b/docs/docbook/cayenne-guide/src/docbkx/part4.xml index e6d2a45..9f15bbd 100644 --- a/docs/docbook/cayenne-guide/src/docbkx/part4.xml +++ b/docs/docbook/cayenne-guide/src/docbkx/part4.xml @@ -23,4 +23,5 @@ + http://git-wip-us.apache.org/repos/asf/cayenne/blob/34be65a0/docs/docbook/cayenne-guide/src/docbkx/performance-tuning.xml ---------------------------------------------------------------------- diff --git a/docs/docbook/cayenne-guide/src/docbkx/performance-tuning.xml b/docs/docbook/cayenne-guide/src/docbkx/performance-tuning.xml index 41f2d98..d7a12c5 100644 --- a/docs/docbook/cayenne-guide/src/docbkx/performance-tuning.xml +++ b/docs/docbook/cayenne-guide/src/docbkx/performance-tuning.xml @@ -116,7 +116,7 @@ PrefetchTreeNode.DISJOINT_BY_ID_PREFETCH_SEMANTICS EJBQLQuery queries by employing the "FETCH" keyword. - SELECT a FROM Artist a LEFT JOIN FETCH a.paintings + SELECT a FROM Artist a LEFT JOIN FETCH a.paintings In this case, the Paintings that exist for the Artist will be obtained at the same time @@ -165,12 +165,12 @@ for(DataRow row : rows) { The following example would return a java.util.List of String objects; - SELECT a.name FROM Artist a + SELECT a.name FROM Artist aThe following will yield a java.util.List containing Object[] instances, each of which would contain the name followed by the dateOfBirth value. - SELECT a.name, a.dateOfBirth FROM Artist a + SELECT a.name, a.dateOfBirth FROM Artist aRefer to third-party query language documentation for further detail on this mechanism. 
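As an illustration, such scalar EJBQL queries are run through the ObjectContext like any other query; this is a minimal sketch assuming the Artist entity from the other examples and an ObjectContext named "context":

    EJBQLQuery query = new EJBQLQuery("SELECT a.name, a.dateOfBirth FROM Artist a");

    // each row is an Object[] holding the name followed by the dateOfBirth value
    List<Object[]> rows = context.performQuery(query);
    for (Object[] row : rows) {
        String name = (String) row[0];
        java.util.Date dateOfBirth = (java.util.Date) row[1];
        // process the scalar values...
    }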
@@ -204,7 +204,7 @@ for(DataRow row : rows) { } }Same thing with a - callback:ObjectSelect.query(Artist.class).iterate(context, (Artist a) -> { + callback:ObjectSelect.query(Artist.class).iterate(context, (Artist a) -> { // do something with the object... ... }); @@ -212,7 +212,7 @@ for(DataRow row : rows) { each iteration. This is a common scenario in various data processing jobs - read a batch of objects, process them, commit the results, and then repeat. This allows to further optimize processing (e.g. by avoiding frequent - commits).try(ResultBatchIterator<Artist> it = ObjectSelect.query(Artist.class).iterator(context)) { + commits).try(ResultBatchIterator<Artist> it = ObjectSelect.query(Artist.class).iterator(context)) { for(List<Artist> list : it) { // do something with each list ... @@ -265,7 +265,7 @@ List<Artist> artists = To take advantage of query result caching, the first step is to mark your queries appropriately. Here is an example for ObjectSelect query. Other types of queries have similar - API:ObjectSelect.query(Artist.class).localCache("artists"); + API:ObjectSelect.query(Artist.class).localCache("artists"); This tells Cayenne that the query created here would like to use local cache of the context it is executed against. A vararg parameter to localCache() (or sharedCache()) method contains so called "cache groups". Those are @@ -277,7 +277,7 @@ List<Artist> artists = providers. One such provider available in Cayenne is a provider for EhCache. It can be enabled on ServerRuntime startup in a custom - Module:ServerRuntimeBuilder + Module:ServerRuntimeBuilder .builder() .addModule((binder) -> binder.bind(QueryCache.class).to(EhCacheQueryCache.class) @@ -285,7 +285,7 @@ List<Artist> artists = .build(); By default EhCache reads a file called "ehcache.xml" located on classpath. You can put your cache configuration in that file. - E.g.:<ehcache xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" + E.g.:<ehcache xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="ehcache.xsd" updateCheck="false" monitoring="off" dynamicConfig="false"> @@ -302,7 +302,7 @@ List<Artist> artists = sufficient, and the users want real-time cache invalidation when the data changes. So in addition to those policies, the app can invalidate individual cache groups explicitly with - RefreshQuery:RefreshQuery refresh = new RefreshQuery("artist"); + RefreshQuery:RefreshQuery refresh = new RefreshQuery("artist"); context.performGenericQuery(refresh); The above can be used e.g. to build UI for manual cache invalidation. It is also possible to automate cache refresh when certain entities are committed. This @@ -310,7 +310,7 @@ context.performGenericQuery(refresh); you will need two things: @CacheGroups annotation to mark entities that generate cache invalidation events and  CacheInvalidationFilter that catches the updates to the annotated objects and generates appropriate invalidation - events:// configure filter on startup + events:// configure filter on startup ServerRuntimeBuilder .builder() .addModule((binder) -> @@ -319,7 +319,7 @@ ServerRuntimeBuilder .build(); Now you can associate entities with cache groups, so that commits to those entities would atomatically invalidate the - groups:@CacheGroups("artists") + groups:@CacheGroups("artists") public class Artist extends _Artist { } Finally you may cluster cache group events. 
They are very small and can be @@ -366,7 +366,7 @@ public class Artist extends _Artist { To do that, set to "false" the following DI property - Constants.SERVER_CONTEXTS_SYNC_PROPERTY, using one of the standard Cayenne DI approaches. E.g. from command - line:java -Dcayenne.server.contexts_sync_strategy=falseOr + line:$ java -Dcayenne.server.contexts_sync_strategy=falseOr by changing the standard properties Map in a custom extensions module:public class MyModule implements Module { http://git-wip-us.apache.org/repos/asf/cayenne/blob/34be65a0/docs/docbook/cayenne-guide/src/docbkx/persistent-objects-objectcontext.xml ---------------------------------------------------------------------- diff --git a/docs/docbook/cayenne-guide/src/docbkx/persistent-objects-objectcontext.xml b/docs/docbook/cayenne-guide/src/docbkx/persistent-objects-objectcontext.xml index 404cab1..6305c9a 100644 --- a/docs/docbook/cayenne-guide/src/docbkx/persistent-objects-objectcontext.xml +++ b/docs/docbook/cayenne-guide/src/docbkx/persistent-objects-objectcontext.xml @@ -267,7 +267,7 @@ generic.writeProperty("name", "New Name");This that spans more than one Cayenne operation. E.g. two sequential commits that need to be rolled back together in case of failure. This can be done via ServerRuntime.performInTransaction - method:Integer result = runtime.performInTransaction(() -> { + method:Integer result = runtime.performInTransaction(() -> { // commit one or more contexts context1.commitChanges(); context2.commitChanges(); @@ -282,7 +282,7 @@ generic.writeProperty("name", "New Name");This When inside the transaction, current thread Transaction object can be accessed via a static method. E.g. here is an example that initializes transaction JDBC connection with a custom connection object - :Transaction tx = BaseTransaction.getThreadTransaction(); + :Transaction tx = BaseTransaction.getThreadTransaction(); tx.addConnection("mydatanode", myConnection); http://git-wip-us.apache.org/repos/asf/cayenne/blob/34be65a0/docs/docbook/cayenne-guide/src/docbkx/queries.xml ---------------------------------------------------------------------- diff --git a/docs/docbook/cayenne-guide/src/docbkx/queries.xml b/docs/docbook/cayenne-guide/src/docbkx/queries.xml index 2b3b580..37254d4 100644 --- a/docs/docbook/cayenne-guide/src/docbkx/queries.xml +++ b/docs/docbook/cayenne-guide/src/docbkx/queries.xml @@ -50,8 +50,8 @@ query:List<Artist> objects = ObjectSelect.query(Artist.class).select(context);This returned all rows in the "ARTIST" table. If the logs were turned on, you might see the following SQL - printed:INFO: SELECT t0.DATE_OF_BIRTH, t0.NAME, t0.ID FROM ARTIST t0 -INFO: === returned 5 row. - took 5 ms. + printed:INFO: SELECT t0.DATE_OF_BIRTH, t0.NAME, t0.ID FROM ARTIST t0 +INFO: === returned 5 row. - took 5 ms. This SQL was generated by Cayenne from the ObjectSelect above. ObjectSelect can have a qualifier to select only the data matching specific criteria. Qualifier is simply an Expression (Expressions where discussed in the previous chapter), appended to the query @@ -61,9 +61,9 @@ INFO: === returned 5 row. - took 5 ms. .where(Artist.NAME.like("Pablo%")) .select(context);The SQL will look different this - time:INFO: SELECT t0.DATE_OF_BIRTH, t0.NAME, t0.ID FROM ARTIST t0 WHERE t0.NAME LIKE ? + time:INFO: SELECT t0.DATE_OF_BIRTH, t0.NAME, t0.ID FROM ARTIST t0 WHERE t0.NAME LIKE ? [bind: 1->NAME:'Pablo%'] -INFO: === returned 1 row. - took 6 ms. +INFO: === returned 1 row. 
- took 6 ms.ObjectSelect allows assembling a qualifier from parts, using the "and" and "or" methods to chain them together:List<Artist> objects = ObjectSelect.query(Artist.class)
@@ -120,28 +120,26 @@ List<String> names = context.performQuery(query);
 example would require three individual positional parameters (named parameters could
 also have been used) to be supplied.
- select p from Painting p where p.paintingTitle in (?1,?2,?3)
+ select p from Painting p where p.paintingTitle in (?1,?2,?3)
 The following example requires a single positional parameter to be supplied. The
 parameter can be any concrete implementation of the java.util.Collection interface such as
 java.util.List or java.util.Set.
- select p from Painting p where p.paintingTitle in ?1
+ select p from Painting p where p.paintingTitle in ?1
 The following example is functionally identical to the one prior.
- select p from Painting p where p.paintingTitle in (?1)
+ select p from Painting p where p.paintingTitle in (?1)
+
- It is
- possible to convert
- an
- Expression
- object used with a
- SelectQuery
+ It is possible to convert
+ an Expression
+ object used with a SelectQuery
 to EJBQL. Use the Expression#appendAsEJBQL methods for this purpose.
-
+
 While Cayenne Expressions discussed previously can be thought of as identical to a JPQL
 WHERE clause, and indeed they are very close, there are a few notable differences:
@@ -278,7 +276,7 @@ query.setParameters(Collections.singletonMap("tableName", "mydb.PAINTING"));
#bind($xyz 'VARCHAR')
#bind($xyz 'DECIMAL' 2)
 Full
- example:update ARTIST set NAME = #bind($name) where ID = #bind($id)
+ example:update ARTIST set NAME = #bind($name) where ID = #bind($id)
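As a usage sketch (the parameter values and the "context" ObjectContext are illustrative, not from the original text), such a template can be bound and executed like this:

    SQLTemplate update = new SQLTemplate(Artist.class,
            "update ARTIST set NAME = #bind($name) where ID = #bind($id)");

    Map<String, Object> params = new HashMap<>();
    params.put("name", "Salvador Dali");
    params.put("id", 1);
    update.setParameters(params);

    // a non-selecting template is executed via performGenericQuery
    context.performGenericQuery(update);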
#bindEqual @@ -303,7 +301,7 @@ query.setParameters(Collections.singletonMap("tableName", "mydb.PAINTING")); #bindEqual($xyz 'VARCHAR') #bindEqual($xyz 'DECIMAL' 2) Full - example:update ARTIST set NAME = #bind($name) where ID #bindEqual($id) + example:update ARTIST set NAME = #bind($name) where ID #bindEqual($id)
#bindNotEqual

 @@ -321,7 +319,7 @@ query.setParameters(Collections.singletonMap("tableName", "mydb.PAINTING"));
#bindNotEqual($xyz 'VARCHAR')
#bindNotEqual($xyz 'DECIMAL' 2)
 Full
- example:update ARTIST set NAME = #bind($name) where ID #bindNotEqual($id)
+ example:update ARTIST set NAME = #bind($name) where ID #bindNotEqual($id)
#bindObjectEqual @@ -427,7 +425,7 @@ select.setParameters(Collections.singletonMap("a", a)); #result('DOB' 'java.util.Date' '' 'artist.DATE_OF_BIRTH') #result('SALARY' 'float') Full - example:SELECT #result('ID' 'int'), #result('NAME' 'String'), #result('DATE_OF_BIRTH' 'java.util.Date') FROM ARTIST + example:SELECT #result('ID' 'int'), #result('NAME' 'String'), #result('DATE_OF_BIRTH' 'java.util.Date') FROM ARTIST
#chain and #chunk @@ -451,8 +449,8 @@ select.setParameters(Collections.singletonMap("a", a)); #chunk(param) ... #end Full example:#chain('OR' 'WHERE') - #chunk($name) NAME LIKE #bind($name) #end" - #chunk($id) ARTIST_ID > #bind($id) #end" + #chunk($name) NAME LIKE #bind($name) #end + #chunk($id) ARTIST_ID > #bind($id) #end #end"
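To show how the optional chunks behave at runtime, here is a hedged sketch (table and parameter names are illustrative): only $name is supplied, so the ARTIST_ID chunk is pruned, while the WHERE prefix is still emitted because one chunk remains.

    String sql = "SELECT * FROM ARTIST"
            + " #chain('OR' 'WHERE')"
            + " #chunk($name) NAME LIKE #bind($name) #end"
            + " #chunk($id) ARTIST_ID > #bind($id) #end"
            + " #end";

    SQLTemplate select = new SQLTemplate(Artist.class, sql);
    select.setParameters(Collections.singletonMap("name", "Salvador%"));

    List<Artist> artists = context.performQuery(select);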
@@ -474,7 +472,7 @@ List<Artist> artists = context.performQuery(query);Just useful with SQLTemplate, as the result type most often than not does not represent a Cayenne entity, but instead may be some aggregated report or any other data whose object structure is opaque to - Cayenne:String sql = SELECT t0.NAME, COUNT(1) FROM ARTIST t0 JOIN PAINTING t1 ON (t0.ID = t1.ARTIST_ID) " + Cayenne:String sql = "SELECT t0.NAME, COUNT(1) FROM ARTIST t0 JOIN PAINTING t1 ON (t0.ID = t1.ARTIST_ID) " + "GROUP BY t0.NAME ORDER BY COUNT(1)"; SQLTemplate query = new SQLTemplate(Artist.class, sql); http://git-wip-us.apache.org/repos/asf/cayenne/blob/34be65a0/docs/docbook/cayenne-guide/src/docbkx/re-filtering.xml ---------------------------------------------------------------------- diff --git a/docs/docbook/cayenne-guide/src/docbkx/re-filtering.xml b/docs/docbook/cayenne-guide/src/docbkx/re-filtering.xml index b36c7f2..930448b 100644 --- a/docs/docbook/cayenne-guide/src/docbkx/re-filtering.xml +++ b/docs/docbook/cayenne-guide/src/docbkx/re-filtering.xml @@ -21,38 +21,40 @@ Basic syntax is described below: - <!-- Ant/Maven in case you only want to specify the schema to import --> - <schema>SCHEMA_NAME</schema> - - <!-- Maven way in case you have nested elements in the schema --> - <schema> - <name>SCHEMA_NAME</name> - ... - </schema> - - <!-- Ant way in case you have nested elements in the schema --> - <schema name="SCHEMA_NAME"> - ... - </schema> + <!-- Ant/Maven in case you only want to specify the schema to import --> + <schema>SCHEMA_NAME</schema> + + <!-- Maven way in case you have nested elements in the schema --> + <schema> + <name>SCHEMA_NAME</name> + ... + </schema> + + <!-- Ant way in case you have nested elements in the schema --> + <schema name="SCHEMA_NAME"> + ... + </schema> The same options are available for catalogs: - <!-- Ant/Maven in case you only want to specify the catalog to import --> - <catalog>CATALOG_NAME</catalog> - - <!-- Maven way in case you have nested elements in the catalog --> - <catalog> - <name>CATALOG_NAME</name> - ... - </catalog> - - <!-- Ant way in case you have nested elements in the catalog --> - <catalog name="CATALOG_NAME"> - ... - </catalog> - + <!-- Ant/Maven in case you only want to specify the catalog to import --> + <catalog>CATALOG_NAME</catalog> + + <!-- Maven way in case you have nested elements in the catalog --> + <catalog> + <name>CATALOG_NAME</name> + ... + </catalog> + + <!-- Ant way in case you have nested elements in the catalog --> + <catalog name="CATALOG_NAME"> + ... + </catalog> + + Current version of reverse engineering doesn't support catalog filtering for Postgres database. +
Combine Schema and Catalog filters @@ -60,31 +62,32 @@ Cayenne supports combination of different schemas and catalogs, and it filters data according to your requirements. You could achieve this by the following example of reverse engineering configuration: - - <configuration> - ... - <reverseEngineering> - - <catalog name="shop_01"> - <schema name="schema-name-01"/> - <schema name="schema-name-02"/> - <schema name="schema-name-03"/> - </catalog> - - <catalog name="shop_02"> - <schema name="schema-name-01"/> - </catalog> - - <catalog name="shop_03"> - <schema name="schema-name-01"/> - <schema name="schema-name-02"/> - <schema name="schema-name-03"/> - </catalog> - - </reverseEngineering> - ... - </configuration> - + <configuration> + ... + <reverseEngineering> + + <catalog> + <name>shop_01</name> + <schema>schema-name-01</schema> + <schema>schema-name-02</schema> + <schema>schema-name-03</schema> + </catalog> + + <catalog> + <name>shop_02</name> + <schema>schema-name-01</schema> + </catalog> + + <catalog> + <name>shop_03</name> + <schema>schema-name-01</schema> + <schema>schema-name-02</schema> + <schema>schema-name-03</schema> + </catalog> + + </reverseEngineering> + ... +</configuration> In the example above, Cayenne reverse engineering process contains three catalogs named as shop_01, shop_02 and shop_03, each of wich has their own schemas. Cayenne will load all data only from the declared catalogs and schemas. @@ -92,138 +95,137 @@ If you want to load everything from database, you could simply declare catalog specification alone. - - <configuration> - ... - <reverseEngineering> - - <catalog name="shop_01"/> - <catalog name="shop_02"/> - <catalog name="shop_03"/> - - </reverseEngineering> - ... - </configuration> - + <configuration> + ... + <reverseEngineering> + + <catalog>shop_01</catalog> + <catalog>shop_02</catalog> + <catalog>shop_03</catalog> + + </reverseEngineering> + ... +</configuration> If you want to do reverse engineering for specific schemas, just remove unwanted schemas from the catalog section. For example, if you want to process schema-name-01 and schema-name-03 schemas only, then you should change reverse engineering section like this. - - <configuration> - ... - <reverseEngineering> - - <catalog name="shop_01"> - <schema name="schema-name-01"/> - <schema name="schema-name-03"/> - </catalog> - - <catalog name="shop_02"> - <schema name="schema-name-01"/> - </catalog> - - <catalog name="shop_03"> - <schema name="schema-name-01"/> - <schema name="schema-name-03"/> - </catalog> - - </reverseEngineering> - ... - </configuration> - + <configuration> + ... + <reverseEngineering> + + <catalog> + <name>shop_01</name> + <schema>schema-name-01</schema> + <schema>schema-name-03</schema> + </catalog> + + <catalog> + <name>shop_02</name> + <schema>schema-name-01</schema> + </catalog> + + <catalog> + <name>shop_03</name> + <schema>schema-name-01</schema> + <schema>schema-name-03</schema> + </catalog> + + </reverseEngineering> + ... +</configuration>
Including and Excluding tables, columns and procedures - + Cayenne reverse engineering let you fine tune table, columns and stored procedures names that you need to import + to your model file. In every filter you can use regexp syntax. Here is some examples of configuration + for common tasks. + + + - 1. Include tables with ‘CRM_’ prefix if you are working in that domain of application: + Include tables with ‘CRM_’ prefix if you are working in that domain of application: - <includeTable>CRM_.*</includeTable> - + <includeTable>CRM_.*</includeTable> - 2. Include tables with ‘_LOOKUP’ suffix + Include tables with ‘_LOOKUP’ suffix - <includeTable> - <pattern>.*_LOOKUP</pattern> - </includeTable> - + <includeTable> + <pattern>.*_LOOKUP</pattern> + </includeTable> - 3. Exclude tables with ‘CRM_’ prefix if you are not working only in that domain of application: + Exclude tables with ‘CRM_’ prefix if you are not working only in that domain of application: - <excludeTable>CRM_.*</excludeTable> - + <excludeTable>CRM_.*</excludeTable> - 4. Include only specific columns that follows specific naming convention: + Include only specific columns that follows specific naming convention: - <includeColumn>includeColumn01</includeColumn> - <includeColumn pattern="includeColumn03"/> - + <includeColumn>includeColumn01</includeColumn> + <includeColumn>includeColumn03</includeColumn> - 5. Exclude system or obsolete columns: + Exclude system or obsolete columns: - <excludeColumn>excludeColumn01</excludeColumn> - <excludeColumn pattern="excludeColumn03"/> - + <excludeColumn>excludeColumn01</excludeColumn> + <excludeColumn>excludeColumn03</excludeColumn> - 6. Include/Exclude columns for particular table or group of tables: + Include/Exclude columns for particular table or group of tables: - <includeTable pattern="table pattern"> - <includeColumn pattern="includeColumn01"/> - <excludeColumn pattern="excludeColumn01"/> - </includeTable> - + <includeTable> + <pattern>table pattern</pattern> + <includeColumn>includeColumn01</includeColumn> + <excludeColumn>excludeColumn01</excludeColumn> + </includeTable> - 7. Include stored procedures: + Include stored procedures: - <includeProcedure>includeProcedure01</includeProcedure> - <includeProcedure pattern="includeProcedure03"/> - + <includeProcedure>includeProcedure01</includeProcedure> + <includeProcedure> + <pattern>includeProcedure03</pattern> + </includeProcedure> - 8. Exclude stored procedures by pattern: + Exclude stored procedures by pattern: - <excludeProcedure>excludeProcedure01</excludeProcedure> - <excludeProcedure pattern="excludeProcedure03"/> - + <excludeProcedure>excludeProcedure01</excludeProcedure> + <excludeProcedure> + <pattern>excludeProcedure03</pattern> + </excludeProcedure> - + - All filtering tags includeTable, excludeTable, includeColumn, excludeColumn, includeProcedure and excludeProcedure have three ways + All filtering tags <includeTable>, <excludeTable>, <includeColumn>, <excludeColumn>, + <includeProcedure> and <excludeProcedure> have three ways to pass filtering RegExp. 
- + text inside tag - <includeTable>CRM_.*</includeTable> - + <includeTable>CRM_.*</includeTable> - pattern attribute + pattern inner tag - <excludeProcedure pattern="excludeProcedure03"/> - + <includeTable> + <pattern>.*_LOOKUP</pattern> + </includeTable> - pattern inner tag + pattern attribute (only for Ant task) - <includeTable> - <pattern>.*_LOOKUP</pattern> - </includeTable> - + <excludeProcedure pattern="excludeProcedure03"/> - + - All filtering tags can be placed inside schema and catalog tags, but also inside <reverseEngineering> tag. It means that filtering rules + All filtering tags can be placed inside schema and catalog tags, but also inside <reverseEngineering> tag. It means that filtering rules will be applied for all schemas and catalogs.
@@ -233,22 +235,21 @@ Initially, let’s make a small sample. Consider the following reverse engineering configuration. - <reverseEngineering> - <catalog>shop-01</catalog> - </reverseEngineering> - + <reverseEngineering> + <catalog>shop-01</catalog> + </reverseEngineering>
In this case reverse engineering will not filter anything from the shop-01 catalog. If you really want to filter database columns, tables, stored procedures and relationships, you could do it in the following way. - <reverseEngineering> - <catalog>shop-01</catalog> - <catalog name="shop-02"> - <includeTable>includeTable-01</includeTable> - </catalog> - </reverseEngineering> - + <reverseEngineering> + <catalog>shop-01</catalog> + <catalog> + <name>shop-02</name> + <includeTable>includeTable-01</includeTable> + </catalog> + </reverseEngineering> Then Cayenne will do reverse engineering for both shop-01 and shop-02 catalogs. First catalog will not be processed for filtering, but the second catalog will be processed with “includeTable-01” filter. @@ -260,69 +261,60 @@ Let’s see how to use patterns in reverse engineering configuration with complete example. - <reverseEngineering> - - <catalog>shop-01</catalog> - - <catalog> - <name>shop-02</name> - </catalog> - - <catalog name="shop-03"> - <includeTable>includeTable-01</includeTable> - - <includeTable> - <pattern>includeTable-02</pattern> - </includeTable> - - <includeTable pattern="includeTable-03"> - <includeColumn pattern="includeColumn-01"/> - <excludeColumn pattern="excludeColumn-01"/> - </includeTable> - - <excludeTable>excludeTable-01</excludeTable> - - <excludeTable> - <pattern>excludeTable-02</pattern> - </excludeTable> - - <excludeTable pattern="excludeTable-03"/> - - <includeColumn>includeColumn-01</includeColumn> - - <includeColumn> - <pattern>includeColumn-02</pattern> - </includeColumn> - - <includeColumn pattern="includeColumn-03"/> - - <excludeColumn>excludeColumn-01</excludeColumn> - - <excludeColumn> - <pattern>excludeColumn-02</pattern> - </excludeColumn> - - <excludeColumn pattern="excludeColumn-03"/> - - <includeProcedure>includeProcedure-01</includeProcedure> - - <includeProcedure> - <pattern>includeProcedure-02</pattern> - </includeProcedure> - - <includeProcedure pattern="includeProcedure-03"/> - - <excludeProcedure>excludeProcedure-01</excludeProcedure> - - <excludeProcedure> - <pattern>excludeProcedure-02</pattern> - </excludeProcedure> - - <excludeProcedure pattern="excludeProcedure-03"/> - - </catalog> - </reverseEngineering> - + <reverseEngineering> + + <catalog>shop-01</catalog> + + <catalog> + <name>shop-02</name> + </catalog> + + <catalog> + <name>shop-03</name> + <includeTable>includeTable-01</includeTable> + + <includeTable> + <pattern>includeTable-02</pattern> + </includeTable> + + <includeTable> + <pattern>includeTable-03</pattern> + <includeColumn>includeColumn-01</includeColumn> + <excludeColumn>excludeColumn-01</excludeColumn> + </includeTable> + + <excludeTable>excludeTable-01</excludeTable> + + <excludeTable> + <pattern>excludeTable-02</pattern> + </excludeTable> + + <includeColumn>includeColumn-01</includeColumn> + + <includeColumn> + <pattern>includeColumn-02</pattern> + </includeColumn> + + <excludeColumn>excludeColumn-01</excludeColumn> + + <excludeColumn> + <pattern>excludeColumn-02</pattern> + </excludeColumn> + + <includeProcedure>includeProcedure-01</includeProcedure> + + <includeProcedure> + <pattern>includeProcedure-02</pattern> + </includeProcedure> + + <excludeProcedure>excludeProcedure-01</excludeProcedure> + + <excludeProcedure> + <pattern>excludeProcedure-02</pattern> + </excludeProcedure> + + </catalog> + </reverseEngineering> The example above should provide you more idea about how to use filtering and patterns in Cayenne reverse engineering. 
You may notice that this example demonstrates both the "name" and "pattern" configurations. Yes, you could use these as separate xml elements
@@ -333,4 +325,47 @@
 and table columns. As “shop-03” has a variety of filter tags, entities from this catalog
 will be filtered by cdbimport.
+
Ant configuration example

 Here is a config sample for the Ant task:

<!-- inside <cdbimport> tag -->
    <catalog>shop-01</catalog>

    <catalog name="shop-02"/>

    <catalog name="shop-03">

        <includeTable>includeTable-01</includeTable>
        <includeTable pattern="includeTable-02"/>

        <includeTable pattern="includeTable-03">
            <includeColumn>includeColumn-01</includeColumn>
            <excludeColumn>excludeColumn-01</excludeColumn>
        </includeTable>

        <excludeTable>excludeTable-01</excludeTable>
        <excludeTable pattern="excludeTable-02"/>

        <includeColumn>includeColumn-01</includeColumn>
        <includeColumn pattern="includeColumn-02"/>

        <excludeColumn>excludeColumn-01</excludeColumn>
        <excludeColumn pattern="excludeColumn-02"/>

        <includeProcedure>includeProcedure-01</includeProcedure>
        <includeProcedure pattern="includeProcedure-02"/>

        <excludeProcedure>excludeProcedure-01</excludeProcedure>
        <excludeProcedure pattern="excludeProcedure-02"/>

    </catalog>

    In the Ant task configuration all filter tags are located directly inside the root <cdbimport> tag, as there is no <reverseEngineering> tag.

http://git-wip-us.apache.org/repos/asf/cayenne/blob/34be65a0/docs/docbook/cayenne-guide/src/docbkx/re-introduction.xml ---------------------------------------------------------------------- diff --git a/docs/docbook/cayenne-guide/src/docbkx/re-introduction.xml b/docs/docbook/cayenne-guide/src/docbkx/re-introduction.xml index 3125a7b..ca2a7fa 100644 --- a/docs/docbook/cayenne-guide/src/docbkx/re-introduction.xml +++ b/docs/docbook/cayenne-guide/src/docbkx/re-introduction.xml @@ -23,25 +23,87 @@ CDBImport is a Maven/Ant plugin that helps you to do reverse engineering. In other words it helps you to synchronize database structure with your Cayenne mapping config. It does not update Java classes by itself, but it synchronizes db and data access layer representation in Cayenne mapping file with actual database state. - Most common practice to complete reverse engineering is to use CDBImport followed by CGen Maven plugin, + Most common practice to complete reverse engineering is to use CDBImport followed by CGen Maven plugin, which does class generation according to the Cayenne mapping file updates. + + Here is simple maven configuration to start with: + + <plugin> + <groupId>org.apache.cayenne.plugins</groupId> + <artifactId>maven-cayenne-plugin</artifactId> + <version></version> + + <configuration> + <map>${project.basedir}/src/main/resources/datamap.map.xml</map> + <url><!-- jdbc url --></url> + <driver><!-- jdbc driver class --></driver> + <username>username</username> + <password>password</password> + <defaultPackage>com.example.package</defaultPackage> + </configuration> + + <executions> + <execution> + <goals> + <goal>cdbimport</goal> + <goal>cgen</goal> + </goals> + </execution> + </executions> + + <dependencies> + <!-- jdbc driver dependency --> + </dependencies> + </plugin> + + For full list of cdbimport parameters see chapter Including Cayenne in a Project +
- Reverse Engineering configuration file + Reverse Engineering configuration + + Cayenne is designed to support database reverse engineering automation process via Maven and Ant build tools. + + - Cayenne is designed to support database reverse engineering automation process via Maven and Ant build tools. - You could control and configure this process in the several ways: + Here is a default template of reverse engineering settings, which should help you to get started: + + <plugin> + ... + <configuration> + ... + <reverseEngineering> + <skipRelationshipsLoading>false</skipRelationshipsLoading> + <skipPrimaryKeyLoading>false</skipPrimaryKeyLoading> + + <catalog> + <schema> + <includeTable> + </includeTable> + </schema> + </catalog> + <includeProcedure>.*</includeProcedure> + </reverseEngineering> + </configuration> + ... + </plugin> + + The whole database structure will be loaded after execution reverse engineering with this stub. - + + + + In the next chapter we will see more configuration details
cdbimport required parameters
catalogStringA database catalog to import tables/stored procedures from. This can - be a pattern in the format supported by - DatabaseMetadata.getTables(). I.e. it can contain '%' wildcard.
defaultPackage String A Java package that will be set as the imported DataMap default and @@ -457,29 +451,24 @@ no package, and will not compile.
excludeTablesbooleanA comma-separated list of Perl5 patterns that defines which table names should be skipped - during the import. This (together with 'includeTables') is the most - flexible way to filter the table list. Another way to filter it is - via "tablePattern" that is limited to filtering with a single - wildcard pattern as defined in DatabaseMetadata.getTables().
includeTablesforceDataMapCatalog booleanA comma-separated list of Perl5 patterns that defines which table names should be - included during the import. Additionally matching tables will be - compared with "excludeTables" pattern. If they match include and - exclude, they will be skipped. Another way to filter it is via - "tablePattern" that is limited to filtering with a single wildcard - pattern as defined in DatabaseMetadata.getTables(). + Automatically tagging each DbEntity with the actual DB catalog/schema (default behavior) + may some time be undesirable. If this is the case then setting forceDataMapCatalog + to true will set DbEntity catalog to one in the DataMap. + Default value false. +
importProceduresforceDataMapSchema booleanIndicates whether stored procedures should be imported from the - database. Default is false. + Automatically tagging each DbEntity with the actual DB catalog/schema (default behavior) + may some time be undesirable. If this is the case then setting forceDataMapSchema + to true will set DbEntity schema to one in the DataMap. + Default value false. +
meaningfulPkTables
namingStrategy StringThe naming strategy used for mapping database names to object entity - names. Default is - org.apache.cayenne.map.naming.SmartNameGenerator. + + The naming strategy used for mapping database names to object entity + names. Default is org.apache.cayenne.dbsync.naming.DefaultObjectNameGenerator.
overwritebooleanIf true (default), deletes all existing mappings before starting scheman import. If - false, already existing entities are preserved.
password String Database user password.
procedurePatternStringPattern to match stored procedure names against for import. Default - is to match all stored procedures. This value is only meaningful if - importProcedures is true.
schemaStringA database schema to import tables/stored procedures from. This can - be a pattern in the format supported by - DatabaseMetadata.getTables(). I.e. it can contain '%' wildcard.
reverseEngineeringXML + An object that contains detailed reverse engineering rules about + what DB objects should be processed. + For full information about this parameter see + reverse engineering chapter. + Here is some simple example: + <reverseEngineering> + <skipRelationshipsLoading>false</skipRelationshipsLoading> + <skipPrimaryKeyLoading>false</skipPrimaryKeyLoading> + + <catalog name="test_catalog"> + <schema name="test_schema"> + <includeTable pattern=".*"/> + <excludeTable>test_table</excludeTable> + </schema> + </catalog> + + <includeProcedure pattern=".*"/> +</reverseEngineering> + +
tablePatternstripFromTableNames StringPattern to match table names against for import. Default is to match - all tables. + Regex that matches the part of the table name that needs to be stripped off. + Here is some examples: + +^myt_ + + +_s$ + + +_abc]]> +
username