db-derby-user mailing list archives

From:    Dyre.Tjeldv...@Sun.COM
Subject: Re: Large multi-record insert performance
Date:    Fri, 16 Mar 2007 16:37:25 GMT
"Sedillo, Derek (Mission Systems)" <Derek.Sedillo@ngc.com> writes:

> Dyre,
>
> The goal is to find the most efficient/optimized way to insert large
> amounts of data into Derby.  For example, working as an Oracle DBA I
> have discovered that I can bulk load data from Pro*C using an array of C
> structures in one insert statement like this:
>
> INSERT INTO SSJ 
> VALUES (:tmp_ssj_data);  // Where tmp_ssj_data is an array (100s or
> 1000s) of structured records 
>
> This approach greatly enhances performance for the large data inserts
> we perform regularly. My original question is: how can I do something
> similar with Derby?

Not to my knowledge. I believe Derby can only bulk load from a file
(which I think others have already described).
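
If it helps, Derby's file import goes through the SYSCS_UTIL.SYSCS_IMPORT_TABLE
system procedure. A rough sketch of calling it from JDBC (the database name,
schema and file path below are made up; SSJ is the table from your example):

import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.DriverManager;

public class ImportSsj {
    public static void main(String[] args) throws Exception {
        // Load the embedded driver (not needed with JDBC 4 auto-loading).
        Class.forName("org.apache.derby.jdbc.EmbeddedDriver");
        Connection conn = DriverManager.getConnection("jdbc:derby:myDB");

        CallableStatement cs = conn.prepareCall(
            "CALL SYSCS_UTIL.SYSCS_IMPORT_TABLE(?, ?, ?, ?, ?, ?, ?)");
        cs.setString(1, "APP");                // schema name (made up)
        cs.setString(2, "SSJ");                // table name
        cs.setString(3, "/tmp/ssj_data.csv");  // data file (made-up path)
        cs.setString(4, ",");                  // column delimiter
        cs.setString(5, "\"");                 // character delimiter
        cs.setString(6, null);                 // codeset; null = platform default
        cs.setShort(7, (short) 0);             // 0 = append, 1 = replace
        cs.execute();

        cs.close();
        conn.close();
    }
}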

> While I realize there are 'basic' constructs for performing a task, they
> are not 'normally' optimal.  For example, performing 1000 separate
> transactions is less efficient than performing a single one.

I'm not sure I understand what you mean here. You control how many inserts
you do between each commit...
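
Roughly along these lines: turn auto-commit off and commit once per 1000
rows instead of once per row (the TAGS table and its columns are just
placeholders):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class CommitEveryN {
    public static void main(String[] args) throws Exception {
        Connection conn = DriverManager.getConnection("jdbc:derby:myDB");
        conn.setAutoCommit(false);              // take control of commits
        PreparedStatement ps = conn.prepareStatement(
            "INSERT INTO TAGS (ID, NAME) VALUES (?, ?)");

        int rows = 100000;
        for (int i = 0; i < rows; i++) {
            ps.setInt(1, i);
            ps.setString(2, "tag" + i);
            ps.executeUpdate();
            if ((i + 1) % 1000 == 0) {
                conn.commit();                  // one commit per 1000 inserts
            }
        }
        conn.commit();                          // commit the remainder

        ps.close();
        conn.close();
    }
}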


> All in one transaction using executeUpdate():
> 100 tags added in 274ms
> 100 tags removed in 70ms
> 1000 tags added in 299ms
> 1000 tags removed in 250ms
> 10000 tags added in 1605ms
> 10000 tags removed in 1500ms
> 100000 tags added in 14441ms
> 100000 tags removed in 19721ms

100,000 inserts in 15 seconds. That's too slow?
Then I'm afraid I don't have any answers...
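
For what it's worth, the closest standard JDBC analogue to the Pro*C array
insert that I can think of is a batched PreparedStatement: the loop is the
same as above, except rows are queued with addBatch() and sent with
executeBatch(). Whether that buys you much over plain executeUpdate()
against an embedded Derby database is something you would have to measure.
A minimal sketch (the column names are made up):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class BatchInsertSsj {
    public static void main(String[] args) throws Exception {
        Connection conn = DriverManager.getConnection("jdbc:derby:myDB");
        conn.setAutoCommit(false);
        PreparedStatement ps = conn.prepareStatement(
            "INSERT INTO SSJ (ID, VAL) VALUES (?, ?)");

        for (int i = 0; i < 100000; i++) {
            ps.setInt(1, i);
            ps.setString(2, "value" + i);
            ps.addBatch();                      // queue the row
            if ((i + 1) % 1000 == 0) {
                ps.executeBatch();              // send 1000 rows at a time
            }
        }
        ps.executeBatch();                      // flush any remaining rows
        conn.commit();

        ps.close();
        conn.close();
    }
}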

-- 
dt

