db-derby-user mailing list archives

From "Mamta Satoor" <msat...@gmail.com>
Subject Re: Large multi-record insert performance
Date Wed, 14 Mar 2007 23:02:25 GMT
If this bulk insert is not a normal part of the application logic and is only
done once in a while, then I wonder whether an import using
SYSCS_UTIL.SYSCS_IMPORT_TABLE would be a faster way to load the data.

Mamta
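
For reference, a minimal sketch of calling that system procedure from JDBC
follows. The database name "sampleDB", the schema/table MYSCHEMA.MYTABLE, and
the file path /tmp/data.csv are placeholders for illustration, not details from
the original question; the null arguments fall back to the default comma,
double-quote, and platform codeset settings.

import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.DriverManager;

public class ImportTableSketch {
    public static void main(String[] args) throws Exception {
        // Embedded Derby connection; "sampleDB" is a placeholder database name.
        try (Connection conn = DriverManager.getConnection("jdbc:derby:sampleDB")) {
            try (CallableStatement cs = conn.prepareCall(
                    "CALL SYSCS_UTIL.SYSCS_IMPORT_TABLE(?, ?, ?, ?, ?, ?, ?)")) {
                cs.setString(1, "MYSCHEMA");       // schema name (placeholder)
                cs.setString(2, "MYTABLE");        // table name (placeholder)
                cs.setString(3, "/tmp/data.csv");  // data file to import (placeholder path)
                cs.setString(4, null);             // column delimiter; null = default comma
                cs.setString(5, null);             // character delimiter; null = default double quote
                cs.setString(6, null);             // codeset; null = platform default
                cs.setShort(7, (short) 0);         // 0 = append to existing rows, non-zero = replace
                cs.execute();
            }
        }
    }
}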


On 3/14/07, Lance J. Andersen <Lance.Andersen@sun.com> wrote:
>
>
>
> Mike Matrigali wrote:
> >
> >
> > Lance J. Andersen wrote:
> >>
> >>
> >> Even if the backend does not provide an optimization for batch
> >> processing, I would hope that there would still be some efficiency,
> >> especially in a networked environment, compared with building the
> >> strings, invoking execute() 1000 times, and the amount of data that
> >> puts on the wire...
> >>
> >>
> > I could not tell from the question whether this was networked or not.  I
> > agree that in the network case, limiting executions is probably best.  In
> > embedded mode I am not sure - I would not be surprised if doing 1000 in a
> > batch is slower than just doing the individual executes.
> >
> > In either case I really would stay away from string manipulation as much
> > as possible, and also stay away from things that create very long SQL
> > statements, like 1000-term VALUES clauses.
> I agree completely.  Let the driver do the heavy lifting :-)
>
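
As a rough illustration of the batching approach discussed in the quoted thread
above, a parameterized insert driven through addBatch()/executeBatch() might
look like the sketch below. The table T(ID, NAME), the database name
"sampleDB", the row count of 1000, and the chunk size of 100 are assumptions
made for illustration only, not details taken from the thread.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class BatchInsertSketch {
    public static void main(String[] args) throws Exception {
        // "sampleDB" and table T(ID INT, NAME VARCHAR(50)) are placeholders.
        try (Connection conn = DriverManager.getConnection("jdbc:derby:sampleDB")) {
            conn.setAutoCommit(false);  // commit once at the end rather than per row
            String sql = "INSERT INTO T (ID, NAME) VALUES (?, ?)";
            try (PreparedStatement ps = conn.prepareStatement(sql)) {
                for (int i = 0; i < 1000; i++) {
                    ps.setInt(1, i);
                    ps.setString(2, "row-" + i);
                    ps.addBatch();             // queue the row; no string concatenation
                    if ((i + 1) % 100 == 0) {
                        ps.executeBatch();     // send a modest chunk rather than one huge statement
                    }
                }
                ps.executeBatch();             // flush any remaining queued rows
            }
            conn.commit();
        }
    }
}

Keeping the statement parameterized and flushing the batch in modest chunks
avoids both the string-building overhead and the very long multi-row VALUES
clauses mentioned above.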
