db-derby-user mailing list archives

From Kristian Waagan <Kristian.Waa...@Sun.COM>
Subject Re: backup size exploded
Date Mon, 05 Nov 2007 16:59:47 GMT
Fabian Merki wrote:
> hi kristian
>  
> the command finished/aborted after about 80 minutes with the following 
> message:
>  
> Log operation null encounters error writing itself out to the log 
> stream, this could be caused by an errant log operation or internal log 
> buffer full due to excessively large log operation. SQLSTATE: XJ001: 
> Java exception: ': java.io.IOException'.

Hi Fabian,

Hmm, even though your problems appear to be caused by some "fishy" 
application code, it does not seem right for Derby to fail in this way.

Is there any way you can easily reproduce the behavior?
I don't know enough about these things to say what the reasons might 
be, but if you can reproduce it and file a Jira issue, there's a chance 
someone will have a look at it.

If it's hard to create a standalone repro, maybe you could turn on 
statement logging in Derby or enable JDBC tracing in your framework if 
possible.
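
For reference, statement logging can be switched on with the 
derby.language.logStatementText property. A minimal sketch, assuming it 
is set in the JVM that boots the engine/Network Server (it can equally 
go into derby.properties in derby.system.home):

    import org.apache.derby.drda.NetworkServerControl;

    public class LoggingServer {
        public static void main(String[] args) throws Exception {
            // Must be set before the engine boots; the text of each
            // statement is then written to derby.log.
            System.setProperty("derby.language.logStatementText", "true");

            // Start the Network Server with its defaults (localhost:1527).
            NetworkServerControl server = new NetworkServerControl();
            server.start(null); // a null writer sends output to System.out
        }
    }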


Sorry I can't help more,
-- 
Kristian


> cheers
> fabian
> 
>     ----- Original Message -----
>     *From:* Fabian Merki <fabian2007@merkisoft.ch>
>     *To:* Derby Discussion <derby-user@db.apache.org>
>     *Sent:* Wednesday, October 31, 2007 11:23 PM
>     *Subject:* Re: backup size exploded
> 
>     hi kristian
>      
>      > 1. How is the data inserted into your database?
>     i'm using hibernate (autocommit off). i think it tried and failed to
>     insert the same row over and over again (because of my program logic),
>     but the string in one column was too long to be inserted (or so)...
>     could this cause the issue? the exception was:
>     Caused by: org.apache.derby.client.am.BatchUpdateException:
>     Non-atomic batch failure.  The batch was submitted, but at least one
>     exception occurred on an individual member of the batch. Use
>     getNextException() to retrieve the exceptions for specific batched
>     elements.
>     unfortunately hibernate doesn't use getNextException...
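>     (for the record, the chain can be walked by hand wherever the
>     exception is caught - a minimal sketch in plain JDBC, assuming you
>     can get at the BatchUpdateException; the method name is illustrative:)
>
>     import java.sql.BatchUpdateException;
>     import java.sql.SQLException;
>
>     static void dumpBatchErrors(BatchUpdateException bue) {
>         // getNextException() chains the per-row failures hidden behind
>         // the generic "Non-atomic batch failure" message.
>         SQLException e = bue.getNextException();
>         while (e != null) {
>             System.err.println(e.getSQLState() + ": " + e.getMessage());
>             e = e.getNextException();
>         }
>     }
>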
>     but i've just recognized that i do session.beginTransaction() and,
>     since it fails (and my code is broken), i only do a session.close()
>     but never a rollback - could this be the reason?
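>
>     (for comparison, the usual rollback-on-failure idiom with the
>     Hibernate 3 Session/Transaction API looks roughly like this -
>     sessionFactory and webRequest are illustrative names:)
>
>     Session session = sessionFactory.openSession();
>     Transaction tx = null;
>     try {
>         tx = session.beginTransaction();
>         session.save(webRequest); // whatever the unit of work does
>         tx.commit();
>     } catch (RuntimeException e) {
>         if (tx != null) {
>             tx.rollback(); // undo the failed work and release locks
>         }
>         throw e;
>     } finally {
>         session.close(); // close() alone does not roll back
>     }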
> 
>      > 2. Do you have multiple connections inserting the data concurrently?
>     yes, but 99.9% of the time it's only one connection which was
>     inserting into one db. there are multiple dbs, each with multiple
>     connections.
> 
>      > 3. Have you tried compressing the table(s)?
>     no, because i was scared by the size (>2gb for one table)...
>     i started the database under another parent directory / network
>     server and ran the following command:
>      
>     CALL SYSCS_UTIL.SYSCS_INPLACE_COMPRESS_TABLE('APP', 'WEBREQUEST', 1, 1, 1);
>     but it has not completed yet (> 30 min) - i'll keep it running over
>     night - we'll see *smile*
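>
>     (for reference, the same call can be issued from JDBC - a minimal
>     sketch assuming an open Connection named conn; the three trailing
>     SMALLINT flags are purge, defragment and truncate-end:)
>
>     CallableStatement cs = conn.prepareCall(
>         "CALL SYSCS_UTIL.SYSCS_INPLACE_COMPRESS_TABLE(?, ?, ?, ?, ?)");
>     try {
>         cs.setString(1, "APP");
>         cs.setString(2, "WEBREQUEST");
>         cs.setShort(3, (short) 1); // purge committed deleted rows
>         cs.setShort(4, (short) 1); // defragment remaining rows
>         cs.setShort(5, (short) 1); // truncate free space at the end
>         cs.execute();
>     } finally {
>         cs.close();
>     }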
> 
>      > 4. Have you specified any tuning properties for the storage layer/engine?
>     i'm using the default settings - no config change.
>      
>      > Which operating system are you using?
>     Linux 2.6.16.21-0.25-xen  x86_64
>      
>      
>     thanks!
>      
>     fabian
>      
>      
> 
>         Fabian Merki wrote:
>          > hi all
>          > 
>          > i encountered a very strange problem.
>          > today the backup of a small db was 7.4 gb and it filled up my disk.
>          > 
>          > running "du -s" results in:
>          > 
>          > 105194  backup/2007-10-14 03-09-31/
>          > 105214  backup/2007-10-15 03-10-23/
>          > 105250  backup/2007-10-16 03-09-40/
>          > 105318  backup/2007-10-17 03-09-29/
>          > 202713  backup/2007-10-18 03-09-52/
>          > 370164  backup/2007-10-19 03-10-36/
>          > i deleted the other backups in the meantime (space problems!)
>          > if there were that many rows/data in my db i would not write this mail.
>          > the strangest thing of all is that count(*) on one of the
>          > problematic tables is 141'655, while the table has 571'211 pages
>          > and estimspacesaving is 0 (numfreepages=0, numfilledpages=1).
>          > the row layout is 2 x bigint + 2 x varchar(255) - much less than
>          > 1 kb per row. the pagesize is 4kb, so more than one row should
>          > fit in one page.
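>          >
>          > (these figures come from the space diagnostics - a minimal
>          > sketch of querying them from JDBC, assuming the 10.2-era
>          > SpaceTable VTI and an open Connection named conn:)
>          >
>          > Statement s = conn.createStatement();
>          > ResultSet rs = s.executeQuery(
>          >     "SELECT NUMALLOCATEDPAGES, NUMFREEPAGES, NUMUNFILLEDPAGES, " +
>          >     "PAGESIZE, ESTIMSPACESAVING " +
>          >     "FROM NEW org.apache.derby.diag.SpaceTable('WEBREQUEST') AS T " +
>          >     "WHERE ISINDEX = 0"); // the base table only, not its indexes
>          > while (rs.next()) {
>          >     System.out.println(rs.getLong(1) + " allocated pages, " +
>          >                        rs.getLong(5) + " bytes estimated saving");
>          > }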
>          > 
>          > i run CALL SYSCS_UTIL.SYSCS_BACKUP_DATABASE(...) every day.
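>          >
>          > (the documented form takes a single argument, the target
>          > directory - a minimal sketch from JDBC with an illustrative
>          > path, assuming an open Connection named conn:)
>          >
>          > CallableStatement cs = conn.prepareCall(
>          >     "CALL SYSCS_UTIL.SYSCS_BACKUP_DATABASE(?)");
>          > cs.setString(1, "/var/backup/derby"); // hypothetical target dir
>          > cs.execute();
>          > cs.close();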
>          > 
>          > can anyone explain why the db started to grow so quickly in size?
>          > why would numallocpages be more than count(*) - i never delete
>          > rows in this table!?!?
> 
>         Hello Fabian,
> 
>         I don't have an answer to your question, but I have a few
>         questions for you :)
> 
>         1. How is the data inserted into your database?
>         2. Do you have multiple connections inserting the data concurrently?
>         3. Have you tried compressing the table(s)?
>         4. Have you specified any tuning properties for the storage layer/engine?
> 
>         These are just a few questions to help us understand what's going on.
>         Hopefully someone will be able to give you a solution to your problem.
>         It would be interesting to see what happens if you try to compress
>         the tables.
> 
>          > 
>          > i'm using db-derby-10.2.2.0-bin and jdk1.5.0_09 (ok, i should
>          > update sometime...)
> 
>         Which operating system are you using?
> 
>         regards,
>         -- 
>         Kristian
> 
>          > 
>          > thanks for any help
>          > fabian

