db-derby-user mailing list archives

From "Sedillo, Derek \(Mission Systems\)" <Derek.Sedi...@ngc.com>
Subject RE: Large multi-record insert performance
Date Wed, 14 Mar 2007 23:06:03 GMT
Actually this will be a part of the application logic.  We have real-time
weather data which we constantly receive and insert into the DB.

- Derek

________________________________

From: Mamta Satoor [mailto:msatoor@gmail.com] 
Sent: Wednesday, March 14, 2007 5:02 PM
To: Derby Discussion
Subject: Re: Large multi-record insert performance


If this bulk insert is not a normal part of the application logic and is
only done once in a while, then I wonder if an import using
SYSCS_UTIL.SYSCS_IMPORT_TABLE would be a faster way to load the data.

Mamta
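
[For reference, a minimal sketch of that import call over JDBC follows. The
WEATHER_DATA table, APP schema, and file path are illustrative assumptions,
not details from this thread.]

import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.DriverManager;

// Sketch of a one-off bulk load with Derby's built-in import procedure.
// WEATHER_DATA and /tmp/weather.csv are hypothetical.
public class ImportTableSketch {
    public static void main(String[] args) throws Exception {
        Connection conn = DriverManager.getConnection("jdbc:derby:weatherdb");
        CallableStatement cs = conn.prepareCall(
                "CALL SYSCS_UTIL.SYSCS_IMPORT_TABLE(?, ?, ?, ?, ?, ?, ?)");
        cs.setString(1, "APP");              // schema name
        cs.setString(2, "WEATHER_DATA");     // table name (hypothetical)
        cs.setString(3, "/tmp/weather.csv"); // input file (hypothetical)
        cs.setString(4, null);               // column delimiter, null = comma
        cs.setString(5, null);               // character delimiter, null = "
        cs.setString(6, null);               // codeset, null = platform default
        cs.setShort(7, (short) 0);           // 0 = append, non-zero = replace
        cs.execute();
        cs.close();
        conn.close();
    }
}

[The procedure does the parsing and insertion inside the engine, which is
what can make it faster than row-at-a-time inserts for occasional loads.]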

On 3/14/07, Lance J. Andersen <Lance.Andersen@sun.com> wrote: 

	Mike Matrigali wrote:
	> Lance J. Andersen wrote:
	>> Even if the backend does not provide optimization for batch
	>> processing, I would hope that there would still be some efficiency,
	>> especially in a networked environment, versus building the strings
	>> and invoking execute() 1000 times, in the amount of data on the
	>> wire...
	>>
	> I could not tell from the question whether this was network or not.
	> I agree that in the network case, limiting executions is probably
	> best.  In embedded I am not sure - I would not be surprised if doing
	> 1000 in a batch is slower than just doing the executes.
	>
	> In either case I really would stay away from string manipulation as
	> much as possible, and also stay away from things that create very
	> long SQL statements like 1000-term VALUES clauses.
	I agree completely.  Let the driver do the heavy lifting :-)