lucene-solr-user mailing list archives

Subject RE: DIH, Full-Import, DB and Performance.
Date Tue, 01 Jun 2010 15:46:33 GMT
The settings and defaults will depend on which version of SQL Server you are
using and which version of the JDBC driver.

The default for responseBuffering was changed to "adaptive" after version 1.2
of the driver, so unless you are using 1.2 or earlier you don't need to set it
explicitly.
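For anyone on 1.2 or earlier who does need to set it, a minimal sketch of a DIH
dataSource with the property as an attribute (the driver class, URL, and
credentials here are illustrative placeholders, not a recommendation):

```xml
<!-- Sketch only: connection details below are placeholders. -->
<dataSource type="JdbcDataSource"
            driver="com.microsoft.sqlserver.jdbc.SQLServerDriver"
            url="jdbc:sqlserver://localhost;databaseName=mydb"
            user="solr"
            password="secret"
            responseBuffering="adaptive"/>
```

The same property can alternatively be appended to the JDBC url itself
(e.g. ";responseBuffering=adaptive"), as the DIH FAQ quoted below notes.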

Also, if I remember correctly, batchSize only takes effect if you are
using cursors; the default is for all data to be sent to the client
(selectMethod is "direct").

Using the default settings for the MS sqljdbc driver caused locking issues
in our database. As soon as the full import started, shared locks were set
on all rows and were not released until all the data had been sent, which
for us took around 30 minutes. During that time no update could obtain an
exclusive lock, which of course led to huge problems.

Setting selectMethod="cursor" solved the problem for us although it does
slow down the full import.
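For reference, a sketch of what the cursor-based configuration looks like
(connection details and the batchSize value are illustrative, not our
production settings):

```xml
<!-- Sketch only: placeholder connection details. batchSize is shown
     because it only takes effect once selectMethod="cursor" is used. -->
<dataSource type="JdbcDataSource"
            driver="com.microsoft.sqlserver.jdbc.SQLServerDriver"
            url="jdbc:sqlserver://localhost;databaseName=mydb"
            user="solr"
            password="secret"
            selectMethod="cursor"
            batchSize="500"/>
```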

Another option that worked for us was to leave selectMethod unset and set
readOnly="true", but be sure you understand the implications. All data is
still sent to the client up front (the default behaviour), giving maximum
performance, and no shared locks are taken, which avoids the blocking
described above. However, readOnly="true" sets the transaction isolation
level to TRANSACTION_READ_UNCOMMITTED, so the select statement ignores any
locks while reading and the consistency of the data cannot be guaranteed.
Whether that matters depends on your particular situation.
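A sketch of that variant, again with placeholder connection details; here
readOnly="true" on the dataSource is what triggers the READ_UNCOMMITTED
isolation described above:

```xml
<!-- Sketch only: selectMethod deliberately left unset, readOnly enabled. -->
<dataSource type="JdbcDataSource"
            driver="com.microsoft.sqlserver.jdbc.SQLServerDriver"
            url="jdbc:sqlserver://localhost;databaseName=mydb"
            user="solr"
            password="secret"
            readOnly="true"/>
```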


> -----Original Message-----
> From: stockii []
> Sent: Tuesday, June 01, 2010 7:44 AM
> To:
> Subject: Re: DIH, Full-Import, DB and Performance.
> do you think that the option
> responseBuffer="adaptive"
> should solve my problem ?
> From DIH FAQ ...:
> I'm using DataImportHandler with MS SQL Server database with sqljdbc
> driver.
> DataImportHandler is going out of memory. I tried adjusting the batchSize
> values but they don't seem to make any difference. How do I fix this?
> There's a connection property called responseBuffering in the sqljdbc
> driver
> whose default value is "full" which causes the entire result set to be
> fetched. See for
> more
> details. You can set this property to "adaptive" to keep the driver
> from
> getting everything into memory. Connection properties like this can be
> set
> as an attribute (responseBuffering="adaptive") in the dataSource
> configuration OR directly in the jdbc url specified in
> DataImportHandler's
> dataSource configuration.
