db-derby-user mailing list archives

From ganest <gan...@ced.tuc.gr>
Subject Performance Tuning Problem ?
Date Thu, 21 May 2009 12:04:24 GMT

Hi all,

I am trying to evaluate Derby (version 10.5.1.1 on a Linux machine, with Java
1.6.0_07) using the embedded driver, but I am having difficulty getting
Derby's performance anywhere close to MySQL's on the same hardware.

Any suggestions about what I can do to close the gap would be appreciated. 

The situation is as follows:

I have created two tables (the full DDL file produced by dblook is attached
at the end):

big(int id, int num, varchar (256) name)  
bigref(int id, int bigid, varchar(256) name) 

and inserted 10,000,000 rows into table big and 1,000,000 rows into table
bigref.

The data was inserted using the following code:

// Insert 10,000,000 rows into table big (id is a generated identity column).
PreparedStatement is = con.prepareStatement(
        "INSERT INTO big (num, name) VALUES (?, ?)");

Random r = new Random();
for (int i = 0; i < 10000000; ++i) {
    is.setInt(1, r.nextInt(1000000));
    is.setString(2, getRandom() + " " + getRandom());
    is.execute();
}

// Insert 1,000,000 rows into table bigref (reusing the statement variable).
is = con.prepareStatement(
        "INSERT INTO bigref (bigid, name) VALUES (?, ?)");

for (int i = 0; i < 1000000; ++i) {
    is.setInt(1, r.nextInt(10000000));
    is.setString(2, getRandom() + " " + getRandom());
    is.execute();
}

String getRandom() {
    return UUID.randomUUID().toString();
}
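For context on the data shape: each NAME is two 36-character UUID strings joined by a space (73 characters), and the leading characters of a UUID string are lowercase hex digits, so LIKE '0ff%' should match roughly one row in 16^3 = 4096, i.e. about 2441 of the 10,000,000 rows in big. A small stand-alone check (plain Java, no Derby needed):

```java
import java.util.UUID;

public class NameShape {
    // Mirrors the getRandom() helper above: one random UUID as a string.
    static String getRandom() {
        return UUID.randomUUID().toString();
    }

    public static void main(String[] args) {
        String name = getRandom() + " " + getRandom();
        // Two 36-char UUID strings plus the separating space.
        System.out.println(name.length()); // 73

        // The first 3 chars of a UUID string are hex digits, so
        // LIKE '0ff%' matches about one row in 16^3 = 4096.
        System.out.println(Math.round(10000000 / 4096.0)); // 2441
    }
}
```

The plan output below reports 2407 qualifying BIG rows, which agrees with this estimate.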

Indicative row of table big:
ID         NUM        NAME
3569699   |766673    |0ffa0185-1270-43d4-ba59-8fa0d6bb84d1 38ce4fd0-d2ec-4f00-8fe7-f70325344f50

Indicative row of table bigref:
ID         BIGID      NAME
154507    |3569699   |e9ad27c0-f322-4f0e-b5a6-894eda12dd7b 08a8b797-bd5a-4f0c-a4c3-4fa1d056ce80

I created the indexes after data insertion, and then ran (as I read on this
mailing list):
call SYSCS_UTIL.SYSCS_COMPRESS_TABLE('GANEST', 'BIG', 1)
call SYSCS_UTIL.SYSCS_COMPRESS_TABLE('GANEST', 'BIGREF', 1)

I set the database properties:
CALL SYSCS_UTIL.SYSCS_SET_DATABASE_PROPERTY('derby.storage.pageSize', '32768');
CALL SYSCS_UTIL.SYSCS_SET_DATABASE_PROPERTY('derby.storage.pageCacheSize', '4000');

I am executing the following query using the ij tool
(java -Dderby.language.maxMemoryPerTable=4096 -Xms256m -Xmx256m -jar
$DERBY_HOME/lib/derbyrun.jar ij); I read about derby.language.maxMemoryPerTable
on this mailing list:

select count(*) from big inner join bigref on big.id = bigref.bigid and big.name like '0ff%';

The result is 258, and the query takes more than 20 seconds to execute. MySQL
with the same configuration produces the result in milliseconds.
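One way to separate cold-cache I/O from steady-state execution is to time two consecutive runs of the same statement, since the second run should hit Derby's page cache. A minimal, Derby-independent sketch (Timer and runTwice are illustrative names; the Callable task stands in for executing the statement over JDBC):

```java
import java.util.concurrent.Callable;

public class Timer {
    // Runs the task twice and reports both timings: the first run pays
    // any cold-cache I/O, the second shows steady-state cost.
    static <T> T runTwice(String label, Callable<T> task) throws Exception {
        T result = null;
        for (int run = 1; run <= 2; run++) {
            long start = System.nanoTime();
            result = task.call();
            long ms = (System.nanoTime() - start) / 1000000;
            System.out.println(label + " run " + run + ": " + ms + " ms");
        }
        return result;
    }

    public static void main(String[] args) throws Exception {
        // Dummy task standing in for running the count(*) query.
        int count = runTwice("count query", new Callable<Integer>() {
            public Integer call() { return 258; }
        });
        System.out.println("result = " + count);
    }
}
```

In a real run, task.call() would execute the prepared count(*) statement and return the value from its single result row.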

The indexes appear to be used and the query plan looks correct.
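Reading the runtime statistics below: the inner BIGREF index scan is opened 2407 times (once per qualifying BIG row) and visits 7231 pages in total. A quick check of those figures (hard-coded from the plan output):

```java
public class PlanMath {
    public static void main(String[] args) {
        // Figures taken from the runtime statistics output below.
        int opens = 2407;         // inner BIGREF index scan: one open per outer row
        int pagesVisited = 7231;  // total index pages touched by those probes

        double pagesPerProbe = (double) pagesVisited / opens;
        System.out.println("pages per probe = " + pagesPerProbe); // about 3
    }
}
```

About 3 pages per probe matches the reported tree height of 3, so if those pages are cold, the join could plausibly be dominated by several thousand random page reads.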

Any idea about what is going wrong would be appreciated. 

Thanks in advance.

===================================================================================

VALUES SYSCS_UTIL.SYSCS_GET_RUNTIMESTATISTICS();

Statement Name: 
	null
Statement Text: 
	select count(*) from  big inner join bigref on big.id=bigref.bigid and
big.name like '0ff%'
Parse Time: 0
Bind Time: 0
Optimize Time: 0
Generate Time: 0
Compile Time: 0
Execute Time: 0
Begin Compilation Timestamp : null
End Compilation Timestamp : null
Begin Execution Timestamp : null
End Execution Timestamp : null
Statement Execution Plan Text: 
Project-Restrict ResultSet (7):
Number of opens = 1
Rows seen = 1
Rows filtered = 0
restriction = false
projection = true
	constructor time (milliseconds) = 0
	open time (milliseconds) = 0
	next time (milliseconds) = 0
	close time (milliseconds) = 0
	restriction time (milliseconds) = 0
	projection time (milliseconds) = 0
	optimizer estimated row count:            1.00
	optimizer estimated cost:        17576.58

Source result set:
	Scalar Aggregate ResultSet:
	Number of opens = 1
	Rows input = 258
		constructor time (milliseconds) = 0
		open time (milliseconds) = 0
		next time (milliseconds) = 0
		close time (milliseconds) = 0
		optimizer estimated row count:         2522.46
		optimizer estimated cost:        17576.58

	Index Key Optimization = false
	Source result set:
		Project-Restrict ResultSet (6):
		Number of opens = 1
		Rows seen = 258
		Rows filtered = 0
		restriction = false
		projection = true
			constructor time (milliseconds) = 0
			open time (milliseconds) = 0
			next time (milliseconds) = 0
			close time (milliseconds) = 0
			restriction time (milliseconds) = 0
			projection time (milliseconds) = 0
			optimizer estimated row count:         2522.46
			optimizer estimated cost:        17576.58

		Source result set:
			Nested Loop Join ResultSet:
			Number of opens = 1
			Rows seen from the left = 2407
			Rows seen from the right = 258
			Rows filtered = 0
			Rows returned = 258
				constructor time (milliseconds) = 0
				open time (milliseconds) = 0
				next time (milliseconds) = 0
				close time (milliseconds) = 0
				optimizer estimated row count:         2522.46
				optimizer estimated cost:        17576.58

			Left result set:
				Index Row to Base Row ResultSet for BIG:
				Number of opens = 1
				Rows seen = 2407
				Columns accessed from heap = {0, 2}
					constructor time (milliseconds) = 0
					open time (milliseconds) = 0
					next time (milliseconds) = 0
					close time (milliseconds) = 0
					optimizer estimated row count:         2400.00
					optimizer estimated cost:         5717.82

					Index Scan ResultSet for BIG using index BIGNAME at read committed
isolation level using instantaneous share row locking chosen by the
optimizer
					Number of opens = 1
					Rows seen = 2407
					Rows filtered = 0
					Fetch Size = 16
						constructor time (milliseconds) = 0
						open time (milliseconds) = 0
						next time (milliseconds) = 0
						close time (milliseconds) = 0
						next time in milliseconds/row = 0

					scan information: 
						Bit set of columns fetched=All
						Number of columns fetched=2
						Number of deleted rows visited=0
						Number of pages visited=10
						Number of rows qualified=2407
						Number of rows visited=2408
						Scan type=btree
						Tree height=-1
						start position: 
	>= on first 1 column(s).
	Ordered null semantics on the following columns: 
0 
						stop position: 
	>= on first 1 column(s).
	Ordered null semantics on the following columns: 
0 
						qualifiers:
None
						optimizer estimated row count:         2400.00
						optimizer estimated cost:         5717.82


			Right result set:
				Index Scan ResultSet for BIGREF using constraint SQL090517120557051 at
read committed isolation level using instantaneous share row locking chosen
by the optimizer
				Number of opens = 2407
				Rows seen = 258
				Rows filtered = 0
				Fetch Size = 16
					constructor time (milliseconds) = 0
					open time (milliseconds) = 0
					next time (milliseconds) = 0
					close time (milliseconds) = 0
					next time in milliseconds/row = 0

				scan information: 
					Bit set of columns fetched={0}
					Number of columns fetched=1
					Number of deleted rows visited=0
					Number of pages visited=7231
					Number of rows qualified=258
					Number of rows visited=2665
					Scan type=btree
					Tree height=3
					start position: 
	>= on first 1 column(s).
	Ordered null semantics on the following columns: 
0 
					stop position: 
	> on first 1 column(s).
	Ordered null semantics on the following columns: 
0 
					qualifiers:
None
					optimizer estimated row count:         2522.46
					optimizer estimated cost:        11858.76

=============================================================================

-- This file was created using Derby's dblook utility.
-- Timestamp: 2009-05-21 14:05:58.306
-- Source database is: /home/ganest/tmp/derbyDBdir/test1
-- Connection URL is: jdbc:derby:/home/ganest/tmp/derbyDBdir/test1...
-- Specified schema is: ganest
-- appendLogs: false

-- ----------------------------------------------
-- DDL Statements for schemas
-- ----------------------------------------------

CREATE SCHEMA "GANEST";

-- ----------------------------------------------
-- DDL Statements for tables
-- ----------------------------------------------


CREATE TABLE "GANEST"."BIGREF" ("ID" INTEGER NOT NULL GENERATED ALWAYS AS IDENTITY (START WITH 1, INCREMENT BY 1), "BIGID" INTEGER NOT NULL, "NAME" VARCHAR(256) NOT NULL);

CREATE TABLE "GANEST"."BIG" ("ID" INTEGER NOT NULL GENERATED ALWAYS AS IDENTITY (START WITH 1, INCREMENT BY 1), "NUM" INTEGER NOT NULL, "NAME" VARCHAR(256) NOT NULL);


-- ----------------------------------------------
-- DDL Statements for indexes
-- ----------------------------------------------

CREATE INDEX "GANEST"."BIGNAME" ON "GANEST"."BIG" ("NAME");

CREATE INDEX "GANEST"."BIGREFNAME" ON "GANEST"."BIGREF" ("NAME");

-- ----------------------------------------------
-- DDL Statements for keys
-- ----------------------------------------------

-- primary/unique

ALTER TABLE "GANEST"."BIG" ADD CONSTRAINT "SQL090516111642550" PRIMARY KEY ("ID");

ALTER TABLE "GANEST"."BIGREF" ADD CONSTRAINT "SQL090517120557050" PRIMARY KEY ("ID");


-- foreign
ALTER TABLE "GANEST"."BIGREF" ADD CONSTRAINT "SQL090517120557051" FOREIGN KEY ("BIGID") REFERENCES "GANEST"."BIG" ("ID") ON DELETE NO ACTION ON UPDATE NO ACTION;

-- 
View this message in context: http://www.nabble.com/Performance-Tuning-Problem---tp23652098p23652098.html
Sent from the Apache Derby Users mailing list archive at Nabble.com.

