db-derby-user mailing list archives

From Ture Munter <ture.mun...@fysik.dtu.dk>
Subject Re: Inserting data into a database on a Derby network server
Date Mon, 08 Oct 2007 23:04:17 GMT
Kristian Waagan <Kristian.Waagan@...> writes:

> As Bryan says, you should close your statements. Even better would be to 
> use a PreparedStatement for all your queries (including those without any 
> variables).
> Unless it is an application requirement, you could also do with just one 
> table and use the (Prepared)Statement.getGeneratedKeys() if you need to 
> obtain the unique identifier after insertion. Something like this:
> Don't know how important it is anymore (with newer Java versions), but 
> you might see a little improvement by using 'Float.valueOf(strings[3])' 
> instead of 'new Float(strings[3].floatValue())'.

Thank you for your suggestions; I'll keep them in mind when I work on other
programs. The two tables are an application requirement: there are potentially
many tables containing data (currently only around 15), so we need to know which
table the data with a specific unique id ended up in. It is certainly a better
idea to use the built-in methods to obtain the last auto-generated key. Maybe
the slowdown observed is caused by more and more rows needing to be sorted(?).
Tomorrow I will try to rewrite parts of the program to use the Java methods for
obtaining the auto-generated keys.
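As a sketch of what that rewrite might look like (table and column names here are hypothetical stand-ins for the real schema), the insert and the key retrieval can be combined in one round trip via Statement.RETURN_GENERATED_KEYS:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class KeyedInsert {
    // Insert one row and return the identity key Derby generated for it.
    // "measurements" and "value" are hypothetical names, not from the thread.
    static long insertAndGetKey(Connection conn, float value) throws SQLException {
        try (PreparedStatement ps = conn.prepareStatement(
                "INSERT INTO measurements (value) VALUES (?)",
                Statement.RETURN_GENERATED_KEYS)) {
            ps.setFloat(1, value);
            ps.executeUpdate();
            // The generated identity key comes back as a one-row result set.
            try (ResultSet rs = ps.getGeneratedKeys()) {
                rs.next();
                return rs.getLong(1);
            }
        }
    }
}
```

This avoids a separate query (or sort) to discover the last key, which may also be relevant to the slowdown question above.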

> Further, you should not have to reconnect after a few hundred 
> insertions. If you have to, it probably means one of two things: the 
> application code is not optimal, or there is a bug in Derby.
> In this case, I *guess* that not closing the statements caused the heap 
> to fill up. 

See my other post to Bryan. I agree with you.
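The leak Kristian guesses at (statements left open until the heap fills) is the general unclosed-resource problem. A self-contained illustration of the fix using a stand-in resource (JDBC statements implement AutoCloseable the same way, so the identical try-with-resources pattern applies to them):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CloseDemo {
    static final AtomicInteger open = new AtomicInteger();

    // Stand-in for a JDBC Statement: counts how many instances are unclosed.
    static class FakeStatement implements AutoCloseable {
        FakeStatement() { open.incrementAndGet(); }
        @Override public void close() { open.decrementAndGet(); }
    }

    public static void main(String[] args) {
        for (int i = 0; i < 1000; i++) {
            // try-with-resources closes the statement even if the body throws,
            // so repeated inserts cannot accumulate open statements.
            try (FakeStatement st = new FakeStatement()) {
                // ... execute an insert here ...
            }
        }
        System.out.println("statements still open: " + open.get());
    }
}
```

With explicit close() calls in a finally block the effect is the same; the point is that every statement is closed before the next loop iteration.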

> Also, the reduced insertion rate can easily be observed with the repro.
> The number of rows is accumulated, the duration is not (i.e. the 
> durations printed are all for inserting 5000 rows). The numbers below 
> are from a run where a commit is done every 5000 rows, which turned out 
> to be slightly worse than every 50 rows (clocked in at 28m36s):
> Connecting to the database
> As can be seen, the time it takes to insert 5000 rows rises from 23 
> seconds at startup to nearly 300 seconds (5 minutes). This has to 
> be investigated as well. I'll see if I can have a look soon, but anyone 
> else is free to check it out. If I get around to it, I'll modify the 
> script slightly and create a Jira.
> Thanks Ture for reporting this.
> regards,

Thanks Kristian and Bryan for so quickly helping me out with the problem.


