From: "Brian Peterson" <publicayers@verizon.net>
To: "Derby Discussion" <derby-user@db.apache.org>
Subject: RE: inserts slowing down after 2.5m rows
Date: Fri, 27 Feb 2009 23:55:03 -0500

I thought I read in the documentation that 1000 was the max initial pages you could allocate, and after that, Derby allocates a page at a time. Is there some other setting for getting it to allocate more at a time?
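For reference, the two allocation settings I do know about look like this, as a sketch (BIG_TABLE and its columns are placeholder names I made up; the 1000-page cap on initialPages is what the Tuning Guide documents):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class InitialPagesSketch {
        public static void main(String[] args) throws Exception {
            Connection conn =
                DriverManager.getConnection("jdbc:derby:bigdb;create=true");
            Statement s = conn.createStatement();
            // derby.storage.initialPages applies to tables created after it
            // is set; 1000 is the documented maximum.
            s.execute("CALL SYSCS_UTIL.SYSCS_SET_DATABASE_PROPERTY("
                    + "'derby.storage.initialPages', '1000')");
            // 32768 bytes is the largest documented page size.
            s.execute("CALL SYSCS_UTIL.SYSCS_SET_DATABASE_PROPERTY("
                    + "'derby.storage.pageSize', '32768')");
            s.execute("CREATE TABLE BIG_TABLE ("
                    + "ID BIGINT PRIMARY KEY, PAYLOAD VARCHAR(2000))");
            s.close();
            conn.close();
        }
    }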

 

Brian

 

From: Michael Segel [mailto:msegel@segel.com] On Behalf Of derby@segel.com
Sent: Friday, February 27, 2009 9:59 PM
To: 'Derby Discussion'
Subject: RE: inserts slowing down after 2.5m rows

 

Ok,

 

For testing, if you allocate 2000 pages, then if my thinking is ok, you'll fly along until you get to about 2100 pages.

 

It sounds like you're hitting a bit of a snag where, after your initial allocation of pages, Derby is only allocating a smaller number of pages at a time.

 

I would hope that you could configure the number of pages to be allocated in blocks as the table grows.

 

 


From: publicayers@verizon.net [mailto:publicayers@verizon.net]
Sent: Friday, February 27, 2009 8:48 PM
To: Derby Discussion
Subject: Re: inserts slowing down after 2.5m rows

 

I've increased the log size and the checkpoint interval, but it doesn't seem to help.
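Concretely, here's roughly what I set, as a sketch (I'm assuming the documented derby.storage.* property names; the byte values are just what I tried):

    import java.sql.Connection;
    import java.sql.DriverManager;

    public class BootWithBigLog {
        public static void main(String[] args) throws Exception {
            // These must be in place before the Derby engine boots.
            // Both storage values are in bytes.
            System.setProperty("derby.storage.logSwitchInterval", "16777216");   // larger log files
            System.setProperty("derby.storage.checkpointInterval", "134217728"); // checkpoint less often
            System.setProperty("derby.system.durability", "test");               // already set, shown for completeness
            Connection conn =
                DriverManager.getConnection("jdbc:derby:bigdb;create=true");
            conn.close();
        }
    }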

 

It looks like the inserts begin to dramatically slow down once the table reaches the initial allocation of pages. Things just fly along until it gets to about 1100 pages (I've allocated an initial 1000 pages, pages are 32k).

 

Any suggestions on how to keep the inserts moving quickly at this point?

 

Brian

 

On Fri, Feb 27, 2009 at 3:41 PM, publicayers@verizon.net wrote:

 

The application is running on a client machine. I'm not sure how to tell if there's a different disk available that I could log to.
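From the docs, I think the transaction log can only be pointed at another disk when the database is created, via the logDevice attribute on the connection URL. A sketch, with a made-up path:

    import java.sql.Connection;
    import java.sql.DriverManager;

    public class CreateWithLogDevice {
        public static void main(String[] args) throws Exception {
            // logDevice only takes effect at database creation time;
            // E:/derbylog is a hypothetical path on a second disk.
            Connection conn = DriverManager.getConnection(
                    "jdbc:derby:bigdb;create=true;logDevice=E:/derbylog");
            conn.close();
        }
    }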

 

If checkpoint is causing this delay, how do I manage that? Can I turn checkpointing off? I already have durability set to test; I'm not concerned about recovering from a crashed db.

 

Brian

 

On Fri, Feb 27, 2009 at 9:34 AM, Peter Ondruška wrote:

 

> Could be checkpoint. BTW, to speed up bulk load you may want to use
> large log files located separately from data disks.

 

2009/2/27, Brian Peterson <dianeayers@verizon.net>:

> I have a big table that gets a lot of inserts. Rows are inserted 10k at a
> time with a table function. At around 2.5 million rows, inserts slow down
> from 2-7s to around 15-20s. The table's dat file is around 800-900M.
>
> I have durability set to "test", table-level locks, a primary key index and
> another 2-column index on the table. Page size is at the max and page cache
> set to 4500 pages. The table gets compressed (inplace) every 500,000 rows.
> I'm using Derby 10.4 with JDK 1.6.0_07, running on Windows XP. I've ruled
> out anything from the rest of the application, including GC (memory usage
> follows a consistent pattern during the whole load). It is a local file
> system. The database has a fixed number of tables (so there's a fixed number
> of dat files in the database directory the whole time). The logs are getting
> cleaned up, so there's only a few dat files in the log directory as well.
>
> Any ideas what might be causing the big slowdown after so many loads?
>
> Brian
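For reference, the two mechanisms mentioned in the quoted post, the in-place compress and the page cache, look roughly like this in code (a sketch; APP and BIG_TABLE are placeholder names):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class CompressSketch {
        public static void main(String[] args) throws Exception {
            // Page cache size is read when the engine boots (unit is pages).
            System.setProperty("derby.storage.pageCacheSize", "4500");
            Connection conn = DriverManager.getConnection("jdbc:derby:bigdb");
            Statement s = conn.createStatement();
            // In-place compress: the three 1s enable purging, defragmenting,
            // and truncating unused space at the end of the table.
            s.execute("CALL SYSCS_UTIL.SYSCS_INPLACE_COMPRESS_TABLE("
                    + "'APP', 'BIG_TABLE', 1, 1, 1)");
            s.close();
            conn.close();
        }
    }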
