db-derby-dev mailing list archives

From Sunitha Kambhampati <ksunitha...@gmail.com>
Subject Re: [jira] Updated: (DERBY-959) Allow use of DRDA QRYDTA block sizes greater than 32K
Date Thu, 15 Jun 2006 18:51:48 GMT

Bryan Pendleton wrote:

> The traces look good to me, although of course it is always hard to 
> read traces,
> it's just a fact of life.

True, traces are hard to read. Thanks very much for going through them, Bryan.

> You mentioned that this is a server-only patch at this point; have you 
> started
> to think about what we should do to the client to take advantage of this?


I concentrated only on the server changes.

The client hardcodes the block size to the DSS max size (32767). So I tried one quick test: I changed the length to 65535 and ran a simple select, and it went fine. A first look at the trace seemed OK with respect to the continuation headers. So maybe the client changes could also be straightforward, though I think a lot more testing is required.

.NetStatementRequest.java:
     void buildQRYBLKSZ() throws SqlException {
-        writeScalar4Bytes(CodePoint.QRYBLKSZ, DssConstants.MAX_DSS_LEN);
+        writeScalar4Bytes(CodePoint.QRYBLKSZ, 65535);
     }
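
For reference, the quick check was essentially along these lines: a plain select through the network client with client-side tracing turned on, so the QRYDTA continuation headers show up in the trace. The database name, table, and trace file below are just placeholders, and I am assuming the usual client trace URL attribute; adjust for your own setup.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

// Sketch of the quick sanity check: run a simple select through the network
// client with tracing enabled, then inspect the trace for the QRYDTA
// continuation headers.  Database, table and trace file names are placeholders.
public class QryBlkSzQuickCheck {
    public static void main(String[] args) throws Exception {
        Class.forName("org.apache.derby.jdbc.ClientDriver");
        // traceFile turns on client-side tracing (assumed URL attribute here)
        String url = "jdbc:derby://localhost:1527/testdb;traceFile=client.trace";
        Connection conn = DriverManager.getConnection(url);
        Statement stmt = conn.createStatement();
        ResultSet rs = stmt.executeQuery("SELECT * FROM mytable");
        int rows = 0;
        while (rs.next()) {
            rows++;
        }
        System.out.println("fetched " + rows + " rows");
        rs.close();
        stmt.close();
        conn.close();
    }
}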

>
> Have you thought about what a "good" value might be for the server? 

I think another way to put it is: what should the 'good' value of 
qryblksz be that the client sends to the server? Do you agree?

So I think the server's maximum block size should be 10M, because that is 
the limit allowed by the spec. From my understanding of the spec, the 
server does not get to choose the block size for QRYDTA; it is the client 
that sends the qryblksz. Any value from 512 to 10M is a valid query block 
size per the spec, and I think it is good and safe to have our server 
accept those values, since the server already knows how to. That way, 
DRDA-compliant clients, like the C client that talks to our server, will 
work OK when sending a valid query block size value (in this case 65535).
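
To make that concrete, the kind of check I have in mind on the server side is simply against the spec limits rather than the 32K DSS maximum. This is only an illustration; the constant and method names below are made up, not the actual server code, I am reading "10M" as 10 * 1024 * 1024, and whether to clamp or reject an out-of-range value is a separate question (I just clamp here to keep the sketch short).

// Illustration only: accept any client-supplied QRYBLKSZ that is legal per
// the DRDA spec (512 bytes up to 10M) instead of capping it at the 32K DSS
// limit.  The names here are hypothetical, not the actual network server code.
public class QryBlkSzCheck {
    private static final int MIN_QRYBLKSZ = 512;                // spec minimum
    private static final int MAX_QRYBLKSZ = 10 * 1024 * 1024;   // spec maximum ("10M")

    /** Return the query block size the server will honor. */
    static int negotiateQryBlkSz(int clientQryblksz) {
        if (clientQryblksz < MIN_QRYBLKSZ) {
            return MIN_QRYBLKSZ;
        }
        if (clientQryblksz > MAX_QRYBLKSZ) {
            return MAX_QRYBLKSZ;
        }
        return clientQryblksz;   // e.g. 65535 from a DRDA-compliant client
    }
}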

> It seems like
> there might be reasons that a value higher than 32K, but lower than 10 
> Meg, turns
> out to offer the best tradeoffs of performance, memory usage, etc. I 
> imagine that
> we would want to run some benchmarks to try to investigate the 
> buffering behaviors
> and where the interesting points are in the performance of the network 
> code. What
> sort of measurements would be revealing? 

Good point. That's right, I agree we need to run some benchmarks to help 
us come up with a "good" value of qryblksz; I have not given this much 
thought yet. We need to watch the memory usage, since we pack the rows 
into an in-memory buffer of at least qryblksz bytes. Also, since the DSS 
max size is always 32K, the bigger the query block size, the more bytes 
need to be shifted to add the continuation headers. So selects with rows 
that fill up the query block would be one test that might show the 
overhead involved in shifting/copying a large amount of bytes. It would 
also be good to investigate what other areas of our network code could be 
improved. I don't know much about the TCP-level limits and how they will 
affect the packet sizes, so I have to learn, but from googling I found this:
http://www.psc.edu/networking/projects/tcptune/
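
Just to give a feel for the shifting overhead, here is a rough back-of-the-envelope sketch: with the DSS segment size fixed at 32K, every additional 32K of query block data needs a continuation header inserted in front of it, which is what forces the byte shifting. I am assuming a 2-byte continuation header purely for the illustration; the real cost per header comes from the actual server writer code.

// Rough illustration of how continuation-header overhead grows with qryblksz:
// data beyond the first 32K DSS segment needs a continuation header in front
// of each extra segment, and inserting those headers shifts the buffered bytes.
// The 2-byte header size is an assumption made only for this sketch.
public class ContinuationOverhead {
    private static final int MAX_DSS_LEN = 32767;    // DSS segment limit
    private static final int CONT_HEADER_LEN = 2;    // assumed header size

    public static void main(String[] args) {
        int[] blockSizes = { 32767, 65535, 131072, 1024 * 1024, 10 * 1024 * 1024 };
        for (int i = 0; i < blockSizes.length; i++) {
            int qryblksz = blockSizes[i];
            // extra DSS segments beyond the first = number of continuation headers
            int continuations = (qryblksz - 1) / MAX_DSS_LEN;
            System.out.println("qryblksz=" + qryblksz
                    + "  continuation headers=" + continuations
                    + "  extra header bytes=" + (continuations * CONT_HEADER_LEN));
        }
    }
}

The point is just that the number of headers (and the shifting) scales with the block size, so a select that fills the whole block is the interesting case to measure.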

As Mike suggested, I'll open a subtask for the perf analysis and the 
client-side changes for DERBY-959.

I won't be able to spend much time on this currently, but I hope someone 
else may be interested in picking it up.

Thanks,
Sunitha.
