From "Bergquist, Brett" <Br...@canoga.com>
Subject RE: Question/observations on purged rows and insert rates
Date Fri, 06 Aug 2010 18:13:54 GMT
The data is actually coming from test measurements from devices in a network.  For testing
purposes, I was able to change the firmware in a device to generate more tests per second
to up the rate.  So there are more things going on than just database work:

	- thread receiving test measurements from many devices and decoding data
	- passing data off to a Java Resource Connector (JCA Resource Adapter) to an endpoint (Message
Driven Bean)
	- inserting the data

So yes, the inserts are being committed for each measurement.  In the real world, we are seeing
about 8 inserts/second, continuously, every second of every day.  This is going up, however, as
more devices are added to the network.  We have gone from about 100K measurements to almost
800K measurements in the last 6 months, so growth is important.
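
For concreteness, the insert path boils down to one JDBC insert and one commit per
measurement, along the lines of the sketch below (the table and column names are made up for
illustration; the real endpoint is the message driven bean mentioned above):

// Minimal sketch of the per-measurement insert path (hypothetical table/columns).
// Each measurement is inserted and committed individually.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.sql.Timestamp;

public class MeasurementWriter {
    private final Connection conn;
    private final PreparedStatement insert;

    public MeasurementWriter(String url) throws SQLException {
        conn = DriverManager.getConnection(url);
        conn.setAutoCommit(false);
        insert = conn.prepareStatement(
            "INSERT INTO MEASUREMENT (DEVICE_ID, MEASURED_AT, VALUE) VALUES (?, ?, ?)");
    }

    // Called once per decoded test measurement coming out of the resource adapter.
    public void write(long deviceId, Timestamp when, double value) throws SQLException {
        insert.setLong(1, deviceId);
        insert.setTimestamp(2, when);
        insert.setDouble(3, value);
        insert.executeUpdate();
        conn.commit();   // one commit per measurement, as described above
    }
}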

The purging of the data is kicked off by a Quartz Job that purges measurement results
that are older than 30 days (configurable).  This is done in chunks of 1000 results per transaction
commit to keep the transaction log size down while still having reasonably good performance.  Right
now this takes about 2 hours with the inserts continuing to come in.  One problem
is that the deletion is affecting the insertion rate.  The deletion does not really have to
be done "as fast as possible", only that it be done before the next day.  So I have made
some changes that allow the deletion to occur without affecting the insertion rate enough
to become a problem.  This will hold us over for a few months.
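
For context, the purge job works roughly like the sketch below: select a batch of keys older
than the cutoff, delete them, commit, and repeat until nothing is left (table and column names
are again hypothetical; in reality the job is scheduled by Quartz):

// Sketch of the chunked purge: delete at most CHUNK rows per transaction so the
// transaction log stays small while inserts continue in parallel.
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Timestamp;

public class MeasurementPurger {
    private static final int CHUNK = 1000;

    public static void purgeOlderThan(Connection conn, Timestamp cutoff) throws SQLException {
        conn.setAutoCommit(false);
        PreparedStatement select = conn.prepareStatement(
            "SELECT ID FROM MEASUREMENT WHERE MEASURED_AT < ?");
        select.setMaxRows(CHUNK);                 // cap each batch at CHUNK keys
        PreparedStatement delete = conn.prepareStatement(
            "DELETE FROM MEASUREMENT WHERE ID = ?");

        while (true) {
            select.setTimestamp(1, cutoff);
            ResultSet rs = select.executeQuery();
            int found = 0;
            while (rs.next()) {
                delete.setLong(1, rs.getLong(1));
                delete.addBatch();
                found++;
            }
            rs.close();
            if (found == 0) {
                break;                            // nothing older than the cutoff remains
            }
            delete.executeBatch();
            conn.commit();                        // one commit per chunk of up to 1000 rows
        }
    }
}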

In tuning this, I noticed the issue that the insertion rate stays low long after the deletion
is over and done with.  I can tell by the CPU usage and the disk I/O usage, as I know the pattern
of normal insertion.

While this has worked okay for the last few years, my model is flawed, and a better approach
would be to partition the data.
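
What I mean by partitioning, roughly, is one table per period so that an old period can be
dropped as a whole table instead of deleted row by row.  Derby has no built-in partitioning, so
this would be hand-rolled, along the lines of the sketch below (the table naming is invented):

// Sketch of hand-rolled partitioning: one MEASUREMENT_<yyyyMM> table per month.
// Purging a month then becomes a single DROP TABLE instead of millions of deletes.
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

public class MeasurementPartitions {

    public static void createPartition(Connection conn, String yyyymm) throws SQLException {
        Statement s = conn.createStatement();
        s.executeUpdate(
            "CREATE TABLE MEASUREMENT_" + yyyymm + " ("
            + " ID BIGINT GENERATED ALWAYS AS IDENTITY PRIMARY KEY,"
            + " DEVICE_ID BIGINT NOT NULL,"
            + " MEASURED_AT TIMESTAMP NOT NULL,"
            + " VALUE DOUBLE NOT NULL)");
        s.close();
    }

    public static void dropPartition(Connection conn, String yyyymm) throws SQLException {
        Statement s = conn.createStatement();
        s.executeUpdate("DROP TABLE MEASUREMENT_" + yyyymm);  // whole month removed at once
        s.close();
    }
}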

The application is running at a customer site and the system is an old Sun V480 running Solaris
10 with 4 CPUs and 8 GB of memory.  The system is not CPU bound by any means, as it is usually
about 70% idle.  It might start to be I/O bound during the deletion phase, as the disks are
not that large or current.  I know I should move the transaction log onto a second disk, but
up until just recently the complexity of the setup outweighed the increase in performance.
This is pretty much a headless system that is never looked at as long as it works correctly.
No sysadmin, no DBA looking at the system.  So up until the recent dramatic increase in the
number of devices in the network, Derby has worked very well.
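
If I do eventually split the log off, my understanding is that Derby can be told at database
creation time to place the transaction log on a different device via the logDevice connection
attribute, roughly like this (the paths here are made up):

// Sketch: creating the database with its transaction log on a second disk using
// Derby's logDevice connection attribute (paths are hypothetical).
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class CreateDbWithSeparateLog {
    public static void main(String[] args) throws SQLException {
        Connection conn = DriverManager.getConnection(
            "jdbc:derby:/data/measurements;create=true;logDevice=/disk2/derbylog");
        conn.close();
    }
}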

Again, I appreciate the feedback.  Next week I will look at writing a standalone Java application
that simulates this setup and see what I find.
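
As a starting point, the simulator would probably look something like the sketch below: a loop
inserting and committing measurements continuously while reporting the observed insert rate,
with a purge thread (not shown) deleting old rows in chunks the same way the Quartz job does.
All names and the schema here are placeholders:

// Rough sketch of the planned standalone test: insert one measurement at a time,
// committing each, and print the observed insert rate as it goes.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.Timestamp;

public class InsertRateSimulator {
    static final String URL = "jdbc:derby:simdb;create=true";

    public static void main(String[] args) throws Exception {
        Connection setup = DriverManager.getConnection(URL);
        setup.createStatement().executeUpdate(
            "CREATE TABLE MEASUREMENT ("
            + " ID BIGINT GENERATED ALWAYS AS IDENTITY PRIMARY KEY,"
            + " DEVICE_ID BIGINT NOT NULL,"
            + " MEASURED_AT TIMESTAMP NOT NULL,"
            + " VALUE DOUBLE NOT NULL)");
        setup.close();

        Connection c = DriverManager.getConnection(URL);
        c.setAutoCommit(false);
        PreparedStatement ins = c.prepareStatement(
            "INSERT INTO MEASUREMENT (DEVICE_ID, MEASURED_AT, VALUE) VALUES (?, ?, ?)");
        long count = 0;
        long start = System.currentTimeMillis();
        while (true) {
            ins.setLong(1, 1L);
            ins.setTimestamp(2, new Timestamp(System.currentTimeMillis()));
            ins.setDouble(3, Math.random());
            ins.executeUpdate();
            c.commit();                            // one commit per row, as in production
            if (++count % 1000 == 0) {
                long elapsed = System.currentTimeMillis() - start;
                System.out.println("inserted " + count + " rows, "
                    + (count * 1000L / elapsed) + " rows/sec");
            }
        }
    }
}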

Brett

-----Original Message-----
From: Mike Matrigali [mailto:mikem_app@sbcglobal.net] 
Sent: Friday, August 06, 2010 1:57 PM
To: derby-dev@db.apache.org
Subject: Re: Question/observations on purged rows and insert rates

Bergquist, Brett wrote:

> 
> Also after testing we are seeing the following. With a database with no 
> deleted rows, my test application is inserting about 150 records/second 
> into a table.  I let this run for about 2 million records and the insert 
> rate is consistent.  Now I purge out 1.5 million records and run the 
> test again.   The insert rate is now about 25 records/second.  Running 
> the above compress with the PURGE_ROWS option and rerun the test and 
> still about 25 records/second.  Run full SYSCS_UTIL.SYSCS_COMPRESS_TABLE 
> and rerun the test and the insert rate is back to 150 records/second.
Both of these insert rates seem really slow.  I am going to guess
that you do one commit per insert, is that right?


