cayenne-user mailing list archives

From Pascal Robert <prob...@druide.com>
Subject Re: Best method to fetch a lot of objects
Date Fri, 19 May 2017 14:25:42 GMT
So far, so good. But I’m getting this:

...
10:14:31.507 [main] INFO org.apache.cayenne.log.CommonsJdbcEventLogger - +++ transaction committed.
10:14:31.508 [main] INFO org.apache.cayenne.log.CommonsJdbcEventLogger - LOCK TABLES AUTO_PK_SUPPORT WRITE
10:14:31.516 [main] INFO org.apache.cayenne.log.CommonsJdbcEventLogger - UNLOCK TABLES
10:14:31.518 [main] INFO org.apache.cayenne.log.CommonsJdbcEventLogger - === returned 4294 rows. - took 254959 ms.
10:14:31.522 [main] INFO org.apache.cayenne.log.CommonsJdbcEventLogger - +++ transaction committed.
org.apache.cayenne.CayenneRuntimeException: [v.4.0.M5 Feb 24 2017 07:47:55] Commit Exception
	at org.apache.cayenne.access.DataContext.flushToParent(DataContext.java:774)
	at org.apache.cayenne.access.DataContext.commitChanges(DataContext.java:691)
	at com.druide.filemaker.migrations.Migration.faitMigrationUtilisateurs(Migration.java:202)
	at com.druide.filemaker.migrations.Migration.main(Migration.java:36)
Caused by: com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: Table 'filemaker.auto_pk_support' doesn't exist

The PK is copied from the FileMaker entity, so I don’t know why it’s trying to find a
filemaker.auto_pk_support table.
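
For reference, here’s roughly what the copy amounts to (trimmed sketch, not my
exact code; Utilisateur and its attributes are stand-ins for my real mapping):

// The FileMaker PK is mapped as a meaningful attribute and set explicitly,
// so I expected Cayenne to use the supplied value instead of generating one.
ObjectContext mysqlContext = runtime.newContext();

for (DataRow fmRow : fileMakerRows) {
    Utilisateur u = mysqlContext.newObject(Utilisateur.class);
    u.setId((Integer) fmRow.get("ID"));   // PK copied from the FileMaker row
    u.setNom((String) fmRow.get("NOM"));
}
mysqlContext.commitChanges();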

> On May 19, 2017, at 9:49 AM, Andrus Adamchik <andrus@objectstyle.org> wrote:
> 
> I concur with Mike on the suggestion, though I would recommend using the vastly
> improved 4.0 API:
> 
> http://cayenne.apache.org/docs/4.0/cayenne-guide/performance-tuning.html#iterated-queries
> 
>> As you iterate over your entire record set, you can convert the DataRows
>> into Cayenne objects
> 
> In 4.0 you can iterate over objects.
> 
>> Gather up 50 or 100 or 1000 
> 
> In 4.0 you can use a batch iterator to receive the stream already split into
> batches. The docs example actually has a typo; the batch iterator looks like this:
> 
> try (ResultBatchIterator<Artist> it =
>         ObjectSelect.query(Artist.class).batchIterator(context, batchSize)) {
> 
>     for (List<Artist> list : it) {
>         ...
>         context.commitChanges();
>     }
> }
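> 
> (Note the try-with-resources: ResultBatchIterator is AutoCloseable, so the
> underlying result set is released once the loop exits.)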
> 
> Andrus
> 
> 
>> On May 19, 2017, at 4:39 PM, Michael Gentry <blacknext@gmail.com> wrote:
>> 
>> Hi Pascal,
>> 
>> I suspect you need to utilize an iterated query:
>> 
>> http://cayenne.apache.org/docs/3.1/cayenne-guide/performance-tuning.html#iterated-queries
>> 
>> As you iterate over your entire record set, you can convert the DataRows
>> into Cayenne objects (see the section in the documentation above the
>> iterated queries documentation) in a *different* DataContext.  Gather up 50
>> or 100 or 1000 (whatever number feels good to you) in that second
>> DataContext and then commit them, throw away that DataContext and create a
>> new one.  Repeat.  This should keep your memory usage fairly constant and
>> allow you to process arbitrarily large record sets (see the sketch below).
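>> 
>> Something like this (untested sketch against the 3.1 API; Artist, the
>> runtime, and the batch size of 1000 are placeholders):
>> 
>> SelectQuery query = new SelectQuery(Artist.class);
>> query.setFetchingDataRows(true);
>> ResultIterator it = sourceContext.performIteratedQuery(query);
>> try {
>>     DataContext batchContext = (DataContext) runtime.newContext();
>>     int count = 0;
>>     while (it.hasNextRow()) {
>>         DataRow row = (DataRow) it.nextRow();
>>         Artist a = batchContext.objectFromDataRow(Artist.class, row);
>>         // ... adjust the new object as needed ...
>>         if (++count % 1000 == 0) {
>>             batchContext.commitChanges();
>>             batchContext = (DataContext) runtime.newContext(); // fresh context
>>         }
>>     }
>>     batchContext.commitChanges(); // final partial batch
>> } finally {
>>     it.close();
>> }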
>> 
>> mrg
>> 
>> 
>> On Fri, May 19, 2017 at 9:27 AM, Pascal Robert <probert@druide.com> wrote:
>> 
>>> Hi,
>>> 
>>> I’m still in my FileMaker -> MySQL migration project. This time, I want to
>>> migrate a FileMaker table that has 445,244 records in it. If I fetch
>>> everything into an object entity for each row, I get a Java heap space
>>> error, which is somewhat expected given the size of the result set.
>>> 
>>> If I call setFetchLimit() with a 10,000 limit, it works fine (roughly the
>>> snippet below). FileMaker doesn’t support fetch limits, so I can’t do
>>> anything on that side.
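>>> 
>>> Roughly the working fetch-limited version (Utilisateur is a stand-in for
>>> my actual entity class):
>>> 
>>> SelectQuery<Utilisateur> query = new SelectQuery<>(Utilisateur.class);
>>> query.setFetchLimit(10000);
>>> List<Utilisateur> batch = context.select(query);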
>>> 
>>> Any tips?
> 

