db-derby-user mailing list archives

From publicay...@verizon.net
Subject Re: inserts slowing down after 2.5m rows
Date Fri, 27 Feb 2009 20:41:02 GMT

  The application is running on a client machine. I'm not sure how to 
tell if there's a different disk available that I could log to.

If checkpointing is causing this delay, how do I manage that? Can I turn 
checkpointing off? I already have durability set to test; I'm not 
concerned about recovering from a crashed db.
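
In case it helps, here's roughly what I was planning to try: raising 
derby.storage.checkpointInterval so checkpoints happen less often. This 
is just a sketch, with "bigdb" as a placeholder database name and the 
value a guess rather than a recommendation:

import java.sql.Connection;
import java.sql.DriverManager;

public class CheckpointTuning {
    public static void main(String[] args) throws Exception {
        // Checkpoint roughly every 100 MB of log written instead of the
        // default ~10 MB; the value is illustrative only.
        System.setProperty("derby.storage.checkpointInterval", "104857600");

        Class.forName("org.apache.derby.jdbc.EmbeddedDriver");
        // "bigdb" is a placeholder database name.
        Connection conn = DriverManager.getConnection("jdbc:derby:bigdb;create=true");
        // ... run the bulk inserts here ...
        conn.close();
    }
}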

Brian

On Fri, Feb 27, 2009 at 9:34 AM, Peter Ondruška wrote:

> Could be checkpoint.. BTW to speed up bulk load you may want to use
> large log files located separately from data disks.
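
If I follow, that would mean creating the database with the logDevice 
URL attribute pointing at another drive, plus a larger 
derby.storage.logSwitchInterval so each log file is bigger. Something 
like the sketch below, where "bigdb", "D:/derbylog", and the size are 
just placeholders; does that look right?

import java.sql.Connection;
import java.sql.DriverManager;

public class SeparateLogDisk {
    public static void main(String[] args) throws Exception {
        // Bigger log files (~32 MB each); the value is illustrative only.
        System.setProperty("derby.storage.logSwitchInterval", "33554432");

        Class.forName("org.apache.derby.jdbc.EmbeddedDriver");
        // logDevice is only honored when the database is created;
        // "bigdb" and "D:/derbylog" are placeholders.
        Connection conn = DriverManager.getConnection(
                "jdbc:derby:bigdb;create=true;logDevice=D:/derbylog");
        conn.close();
    }
}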

2009/2/27, Brian Peterson <dianeayers@verizon.net>:
> I have a big table that gets a lot of inserts. Rows are inserted 10k at a
> time with a table function. At around 2.5 million rows, inserts slow down
> from 2-7s to around 15-20s. The table's dat file is around 800-900M.
>
> I have durability set to "test", table-level locks, a primary key index
> and another 2-column index on the table. Page size is at the max and page
> cache set to 4500 pages. The table gets compressed (inplace) every
> 500,000 rows. I'm using Derby 10.4 with JDK 1.6.0_07, running on Windows
> XP. I've ruled out anything from the rest of the application, including
> GC (memory usage follows a consistent pattern during the whole load). It
> is a local file system. The database has a fixed number of tables (so
> there's a fixed number of dat files in the database directory the whole
> time). The logs are getting cleaned up, so there's only a few dat files
> in the log directory as well.
>
> Any ideas what might be causing the big slowdown after so many loads?
>
> Brian
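
For reference, here is the setup from the quoted message expressed as 
code, with the property names as I understand them; "bigdb" and 
APP.BIGTABLE are placeholders for the real database and table:

import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.DriverManager;

public class LoadSetup {
    public static void main(String[] args) throws Exception {
        // Settings described above; values match the quoted message.
        System.setProperty("derby.system.durability", "test");   // no log sync
        System.setProperty("derby.storage.pageSize", "32768");    // max page size
        System.setProperty("derby.storage.pageCacheSize", "4500");
        System.setProperty("derby.storage.rowLocking", "false");  // table-level locks

        Class.forName("org.apache.derby.jdbc.EmbeddedDriver");
        Connection conn = DriverManager.getConnection("jdbc:derby:bigdb;create=true");
        // ... insert 10k rows at a time via the table function ...

        // In-place compress, run every 500,000 rows.
        CallableStatement cs = conn.prepareCall(
                "CALL SYSCS_UTIL.SYSCS_INPLACE_COMPRESS_TABLE(?, ?, 1, 1, 1)");
        cs.setString(1, "APP");
        cs.setString(2, "BIGTABLE");
        cs.execute();
        cs.close();
        conn.close();
    }
}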
