db-derby-user mailing list archives

From: Sai Pullabhotla <sai.pullabho...@jmethods.com>
Subject: Re: Derby Statements
Date: Fri, 29 Jan 2010 15:18:55 GMT
Ron,

Thanks for the hint. I'll have to try it out. Unfortunately, I cannot
batch the inserts due to our app requirements. But I'm pretty sure I
tried it with as few as 5 threads constantly inserting and deleting
records and was still able to produce the lock timeout errors. Our app
does not normally read rows (select statements) from the same table. A
record is inserted, and when it is no longer needed, it is deleted with
"delete where primarykey=?". That seems to be the main difference
between your case and mine. When I get a chance, I will put together
some simple repro code and post it.
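
In the meantime, the pattern each worker thread follows is essentially
this (a rough sketch only; the table and column names are placeholders,
not our real schema):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

// Sketch of the workload: each thread inserts a row, then later deletes
// it by primary key. WORK_ITEMS / ID / PAYLOAD are made-up names.
void insertRecord(Connection conn, long id, String payload) throws SQLException {
    PreparedStatement ps = conn.prepareStatement(
            "INSERT INTO WORK_ITEMS (ID, PAYLOAD) VALUES (?, ?)");
    try {
        ps.setLong(1, id);
        ps.setString(2, payload);
        ps.executeUpdate();
    } finally {
        ps.close();
    }
}

void deleteRecord(Connection conn, long id) throws SQLException {
    PreparedStatement ps = conn.prepareStatement(
            "DELETE FROM WORK_ITEMS WHERE ID = ?");
    try {
        ps.setLong(1, id);
        ps.executeUpdate();
    } finally {
        ps.close();
    }
}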

Regards,
Sai Pullabhotla

On Fri, Jan 29, 2010 at 7:24 AM, Ronald Rudy <ronchalant@gmail.com> wrote:
> Sai,
>
> When I have large numbers of inserts happening that I can't batch, I end
> up funneling them through a method that enforces throttling with a
> Semaphore, so something like:
>
> import java.util.concurrent.Semaphore;
>
> // 10 permits, fair set to true so waiting threads are served in arrival order
> final Semaphore throttle = new Semaphore(10, true);
>
> // acquire() blocks until a permit is free, so it can throw InterruptedException
> void insertMethod(/* whatever params I need */) throws InterruptedException {
>     throttle.acquire();
>     try {
>         // do insert stuff
>     } finally {
>         throttle.release();
>     }
> }
>
> That way only 10 concurrent inserts at a time will happen (you can adjust
> this accordingly) and it will not slam Derby quite so much.  It's crude,
> but effective if you need to avoid lock errors.  Perhaps it will help you?
> It's always better to batch inserts when you can though.
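>
> If you ever find you can batch some of them, the JDBC side looks roughly
> like this (just a sketch; "pending", Record, and the table/column names
> are made up for illustration):
>
> // Batch all pending rows in one transaction instead of autocommitting each:
> conn.setAutoCommit(false);
> PreparedStatement ps = conn.prepareStatement(
>         "INSERT INTO MYTABLE (ID, DATA) VALUES (?, ?)");
> try {
>     for (Record r : pending) {   // pending: hypothetical queued records
>         ps.setLong(1, r.getId());
>         ps.setString(2, r.getData());
>         ps.addBatch();
>     }
>     ps.executeBatch();           // one round trip of work, one commit below
>     conn.commit();
> } finally {
>     ps.close();
> }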
>
> My inserts seem to work fine on their own.  But when I have a resultset
> iterating over the table's contents - one that may be open for a minute
> or more - while deletes are happening on items within that resultset, the
> 'lock' issues rear their head more frequently than I can afford.
>
> -Ron
>
>
> On Jan 29, 2010, at 8:14:12 AM, Sai Pullabhotla wrote:
>
>> Ron,
>>
>> Even though I do not have a solution for your problem, I just wanted to
>> let the community know that I ran into the same issue.  Essentially, I
>> have a Tomcat connection pool (DBCP) with 20 max connections connecting
>> to embedded Derby.  Under heavy load (or during load testing), inserting
>> and deleting concurrently from the same table resulted in numerous "could
>> not obtain lock" errors.  The lock timeout was set to 1 min., which I
>> think should be enough, and the table is not very large either.  The
>> errors occur randomly on both inserts and deletes.  After a while, none
>> of the inserts and deletes seem to work at all (meaning all of them end
>> up waiting for a lock and give up after the lock timeout).
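>>
>> For anyone trying to see what Derby is doing when this happens: Derby can
>> log lock timeouts and dump the lock table to derby.log if two standard
>> Derby properties are set before the engine boots (shown here as system
>> properties; they can also go in derby.properties):
>>
>> // Log lock contention details; deadlockTrace additionally dumps the lock
>> // table on a timeout. Must be set before loading the embedded driver.
>> System.setProperty("derby.locks.monitor", "true");
>> System.setProperty("derby.locks.deadlockTrace", "true");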
>>
>> Regards,
>> Sai Pullabhotla
>>
>> On Fri, Jan 29, 2010 at 6:53 AM, Ronald Rudy <ronchalant@gmail.com> wrote:
>>> I hope you can help me with what's probably a basic question, but one I
>>> haven't found an answer to elsewhere.
>>>
>>> I haven't used Derby extensively, but I do have experience with
>>> multithreaded applications with MySQL.  I've noticed some interesting
>>> but surprising behavior with Derby that I was hoping I could get around.
>>>
>>> Typically in my app I have a large number of inserts happening from
>>> multiple threads as the system functions.  I have another thread that
>>> runs occasionally updating the status of these records based on queries
>>> to an external source; this update may be a record update or a delete of
>>> the record.
>>>
>>> Initially I had it set up so this "updating" thread was iterating over a
>>> scroll-insensitive updatable resultset, deleting/updating as needed.
>>> But this resulted in a lot more blocking than I can afford: the inserts
>>> - which happen frequently, up to say 5-8 a second at times, but nothing
>>> that seems like it should terribly overburden Derby - would block the
>>> resultset from even stepping to the first record.  So instead I had it
>>> iterate over a read-only resultset and then separately delete/update
>>> records - with inserts happening from various other threads all the
>>> while.  (Each thread also has its own pooled connection from DBCP.)
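>>>
>>> To be concrete, the updater thread now does roughly this (sketch only;
>>> the table, column, and helper names are invented for illustration):
>>>
>>> // Read-only scan on one connection; deletes go through a second one.
>>> Statement st = readConn.createStatement();  // forward-only, read-only by default
>>> ResultSet rs = st.executeQuery("SELECT ID FROM ITEMS");
>>> while (rs.next()) {
>>>     long id = rs.getLong(1);
>>>     if (externalSourceSaysDelete(id)) {     // hypothetical status check
>>>         deleteStmt.setLong(1, id);          // "DELETE FROM ITEMS WHERE ID = ?" on writeConn
>>>         deleteStmt.executeUpdate();
>>>     }
>>> }
>>> rs.close();
>>> st.close();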
>>>
>>> What I'm seeing is that under heavy load, I occasionally get "lock could
>>> not be obtained" exceptions.  I could ramp up the lock timeout, but I
>>> feel like that would just be kicking the problem down the road, and that
>>> there shouldn't be any problem with what I'm doing.
>>>
>>> To this point, all the inserts are on autocommitting connections, and
>>> there's really no way to batch them; they need to be inserted in real
>>> time.  I can try batching the updates/deletes from iterating over the
>>> resultset, but I really would prefer (for requirements of my app) that
>>> these be performed in real time as well.  The only solution I've come up
>>> with so far is to select a "page" of data to update at a time (say 100
>>> records), then close that resultset/statement and select the next
>>> "page".  It's not critical that the resultset I iterate over include all
>>> the data that's been inserted since the initial query was run, though I
>>> suppose it would be preferable.
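>>>
>>> The paging idea, keyed off the primary key so each "page" is a
>>> short-lived statement, would look something like this (sketch only;
>>> assumes a Derby version with FETCH FIRST support, 10.5+, and placeholder
>>> names again):
>>>
>>> PreparedStatement page = conn.prepareStatement(
>>>         "SELECT ID FROM ITEMS WHERE ID > ? ORDER BY ID FETCH FIRST 100 ROWS ONLY");
>>> long lastId = 0;
>>> int rowsInPage;
>>> do {
>>>     page.setLong(1, lastId);
>>>     ResultSet rs = page.executeQuery();
>>>     rowsInPage = 0;
>>>     while (rs.next()) {
>>>         lastId = rs.getLong(1);   // remember where this page ended
>>>         rowsInPage++;
>>>         // update/delete this row via separate statements here
>>>     }
>>>     rs.close();
>>> } while (rowsInPage == 100);      // a short page means no more rows
>>> page.close();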
>>>
>>> Any thoughts on what I might be doing wrong, or what I can do to make
>>> this work a little more reliably?  Performance is important, but
>>> stability and lower overhead are more important.
>>>
>>> -Ron
