db-derby-dev mailing list archives

From "Bryan Pendleton (JIRA)" <derby-...@db.apache.org>
Subject [jira] Updated: (DERBY-428) NetworkClient PreparedStatement.executeBatch() hangs if batch is too large (ArrayIndexOutOfBoundsException in Network Server)
Date Sun, 19 Feb 2006 22:15:52 GMT
     [ http://issues.apache.org/jira/browse/DERBY-428?page=all ]

Bryan Pendleton updated DERBY-428:
----------------------------------

    Attachment: b428.java
                derby-428.diff

Attached is a standalone test program, b428.java, for experimenting with the bug, and a patch
proposal, derby-428.diff.

The patch contains a server-side change, a client-side change, and a regression test.

The server-side change is to call ensureLength() in DDMWriter.startDDM(). The DDMWriter working
buffer is designed to grow dynamically to accommodate the data being written; this dynamic
growth is implemented via a coding rule requiring that every DDMWriter internal routine
call ensureLength() to declare its buffer size requirements before writing bytes into the
buffer. startDDM() was missing that call. It was only luck that this hadn't caused problems
in the past; this particular bug exposed the omission by causing the server to write a
tremendous number of very small DDM records in a single correlated chain, which meant that
eventually (around batch element 9000) startDDM() tried to write past the end of the buffer
without calling ensureLength() first. Simple change, even if my explanation is not so clear :)
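To make the buffer-growth rule concrete, here is a minimal sketch of the pattern described
above. The class and method names are simplified stand-ins, not Derby's actual DDMWriter code;
the point is only that every routine must reserve space via ensureLength() before writing.

```java
// Minimal sketch of the DDMWriter growth rule; names are illustrative,
// not Derby's actual internals.
public class GrowableWriter {
    private byte[] buffer = new byte[16]; // deliberately small to force growth
    private int offset = 0;

    // Every internal write routine must call this first, declaring how many
    // bytes it is about to write; the buffer doubles until the bytes fit.
    private void ensureLength(int length) {
        int required = offset + length;
        if (required > buffer.length) {
            int newSize = buffer.length;
            while (newSize < required) {
                newSize *= 2;
            }
            byte[] grown = new byte[newSize];
            System.arraycopy(buffer, 0, grown, 0, offset);
            buffer = grown;
        }
    }

    // Analogue of the buggy routine: writes a small fixed-size header.
    // The fix in the patch is exactly this kind of ensureLength() call,
    // which startDDM() was missing.
    public void startRecord() {
        ensureLength(4);              // reserve room before writing
        buffer[offset++] = (byte) 0xD0;
        buffer[offset++] = 0x01;
        buffer[offset++] = 0x00;
        buffer[offset++] = 0x04;
    }

    public int size() { return offset; }
}
```

With the ensureLength() call in place, writing thousands of small records simply grows the
buffer; without it, the array indexing eventually runs past the end of the buffer, which is
the ArrayIndexOutOfBoundsException in the stack trace below.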

The client-side change addresses the fact that DRDA imposes a hard limit of 65535 elements
in a single correlated request, because the correlation identifier is a two-byte unsigned
integer. Without this change, the correlation identifier wraps around when the client goes
to write the 65536th element of the batch, and we start breaking DRDA protocol rules, since
DRDA requires that the correlation IDs in a single request be always increasing. The change
in this patch proposal causes the client to throw an exception if it is asked to execute a
batch containing more than 65534 elements. The reason for the number 65534, rather than 65535,
is that the value 0xFFFF seems to be reserved for some special purpose.
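The limit check might look something like the following sketch. This is an illustration of
the guard described above under the stated assumptions (two-byte unsigned correlation IDs,
0xFFFF reserved); the class, constant, and exception type are hypothetical, not the actual
names in the patch.

```java
// Hypothetical sketch of the client-side batch-size guard; names are
// illustrative, not Derby's actual identifiers.
public class CorrelationIdCheck {
    // DRDA correlation IDs are two-byte unsigned integers, so at most
    // 0xFFFF distinct values; 0xFFFF itself appears to be reserved,
    // leaving 65534 usable batch elements.
    static final int MAX_BATCH_SIZE = 0xFFFF - 1; // 65534

    // Reject oversized batches up front, instead of letting the
    // correlation ID wrap around mid-request and break protocol.
    public static void checkBatchSize(int batchSize) {
        if (batchSize > MAX_BATCH_SIZE) {
            throw new IllegalArgumentException(
                "Batch of " + batchSize + " elements exceeds the DRDA "
                + "limit of " + MAX_BATCH_SIZE + " correlated requests");
        }
    }
}
```

Failing fast with a clear exception is preferable to the wrap-around behavior, where the
protocol violation surfaces much later as a confusing server-side error or a hang.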

Experimenting with the JCC driver, I discovered that it seems to reserve more than just 0xFFFF:
0xFFFE and 0xFFFD also appear to be special values; the largest number of elements that I could
successfully execute in a single batch with the JCC driver is 65532. I don't know what is
going on with those special values, unfortunately.

The regression test verifies that a batch containing 65532 elements executes successfully
with both the Network Client and JCC drivers. It also verifies that, with the Network Client,
we get the expected exception if we try to execute a batch with more than 65534 elements.
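The actual regression test runs JDBC batches against a live server, but the arithmetic it
exercises can be sketched standalone: simulate assigning two-byte correlation IDs to batch
elements and observe where the strictly-increasing rule breaks. The helper names here are
hypothetical, for illustration only.

```java
// Illustrative sketch (not the actual regression test): simulates two-byte
// correlation ID assignment to show why moderate batches are safe but an
// oversized batch violates DRDA's strictly-increasing rule.
public class CorrelationIdWrap {
    // Assume correlation IDs start at 1 and occupy two unsigned bytes,
    // so they wrap back toward 0 once the 16-bit range is exhausted.
    static int correlationId(int element) {
        return (element + 1) & 0xFFFF;
    }

    // True if every element in a batch of the given size receives a
    // strictly increasing correlation ID.
    static boolean idsStrictlyIncreasing(int batchSize) {
        int prev = 0;
        for (int i = 0; i < batchSize; i++) {
            int id = correlationId(i);
            if (id <= prev) {
                return false; // ID wrapped around: protocol violation
            }
            prev = id;
        }
        return true;
    }
}
```

A batch of 65532 elements stays within the usable ID range, while a batch of 65536 wraps
the identifier back to zero partway through, which is the protocol breakage the client-side
check now prevents.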

Comments, suggestions, and feedback are welcome!



> NetworkClient PreparedStatement.executeBatch() hangs if batch is too large (ArrayIndexOutOfBoundsException
in Network Server)
> -----------------------------------------------------------------------------------------------------------------------------
>
>          Key: DERBY-428
>          URL: http://issues.apache.org/jira/browse/DERBY-428
>      Project: Derby
>         Type: Bug
>   Components: Network Client
>  Environment: Linux atum01 2.4.20-31.9 #1 Tue Apr 13 18:04:23 EDT 2004 i686 i686 i386
GNU/Linux
> Java(TM) 2 Runtime Environment, Standard Edition (build 1.5.0_03-b07)
> Java HotSpot(TM) Client VM (build 1.5.0_03-b07, mixed mode, sharing)
>     Reporter: Bernt M. Johnsen
>     Assignee: Bryan Pendleton
>  Attachments: b428.java, derby-428.diff
>
> When running
>         s.executeUpdate("create table t (i integer)");
>         PreparedStatement p = c.prepareStatement("insert into t values(?)");
>         for (int i=0; i<N; i++) {
>             p.setInt(1,i);
>             p.addBatch();
>         }
>         System.out.println("Ok");
>         p.executeBatch();
> If N is 9000, the server reports:
> 524272
> java.lang.ArrayIndexOutOfBoundsException: 524272
>         at org.apache.derby.impl.drda.DDMWriter.startDdm(DDMWriter.java:315)
>         at org.apache.derby.impl.drda.DRDAConnThread.writeSQLCARD(DRDAConnThread.java:4937)
>         at org.apache.derby.impl.drda.DRDAConnThread.writeSQLCARDs(DRDAConnThread.java:4898)
>         at org.apache.derby.impl.drda.DRDAConnThread.writeSQLCARDs(DRDAConnThread.java:4888)
>         at org.apache.derby.impl.drda.DRDAConnThread.checkWarning(DRDAConnThread.java:7239)
>         at org.apache.derby.impl.drda.DRDAConnThread.parseEXCSQLSTT(DRDAConnThread.java:3605)
>         at org.apache.derby.impl.drda.DRDAConnThread.processCommands(DRDAConnThread.java:859)
>         at org.apache.derby.impl.drda.DRDAConnThread.run(DRDAConnThread.java:214)
> agentThread[DRDAConnThread_3,5,main]
> While the client hangs in executeBatch().
> If N is 8000, the client gets the following Exception:
> Exception in thread "main" org.apache.derby.client.am.BatchUpdateException: Non-atomic
batch failure.  The batch was submitted, but at least one exception occurred on an individual
member of the batch. Use getNextException() to retrieve the exceptions for specific batched
elements.
>         at org.apache.derby.client.am.Agent.endBatchedReadChain(Agent.java:267)
>         at org.apache.derby.client.am.PreparedStatement.executeBatchRequestX(PreparedStatement.java:1596)
>         at org.apache.derby.client.am.PreparedStatement.executeBatchX(PreparedStatement.java:1467)
>         at org.apache.derby.client.am.PreparedStatement.executeBatch(PreparedStatement.java:945)
>         at AOIB.main(AOIB.java:24)

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
   http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see:
   http://www.atlassian.com/software/jira

