openjpa-dev mailing list archives

From "Roytman, Alex" <Roytm...@peacetech.com>
Subject RE: Why would Kodo refuse to batch inserts in certain tables? Big performance drop migrating to Kodo 4.1
Date Tue, 17 Oct 2006 21:20:09 GMT
Here are some fixed Oracle JDBC driver bugs related to batching:

Bug 1347809  Corrupt data possible using setNull(n,java.sql.Types.DATE)
with batch executes
 This note gives a brief overview of bug 1347809. 

Affects:
  Product (Component): JDBC (OCI JDBC driver)
  Range of versions believed to be affected: Versions < 9.2
  Versions confirmed as being affected: 9.0.1.2
  Platforms affected: Generic (all / most platforms affected)

Fixed:
  This issue is fixed in:
    9.0.1.3 (Server Patch Set)
    9.2.0.1 (Base Release)

Symptoms: Corruption (Logical)
Related To: JDBC
Description


Using the PreparedStatement's setNull() method to set
DATE fields to NULL with batch execution can result
in data corruption.
eg: Using pstmt.setNull(6, java.sql.Types.DATE) with
    pstmt.setExecuteBatch(100) can lead to data corruption.

Workaround:
  Use pstmt.setString(6,null) instead.
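The workaround above can be sketched in plain JDBC. The recording stub below is hypothetical scaffolding (a stdlib dynamic proxy that logs method names) so the binding logic can be exercised without a real Oracle connection, and `bindNullDate` is an illustrative helper name, not part of any driver API:

```java
import java.lang.reflect.Proxy;
import java.sql.PreparedStatement;
import java.util.ArrayList;
import java.util.List;

public class DateNullWorkaround {

    // Workaround from the bug note: on affected OCI drivers, bind a NULL
    // DATE column as a null string instead of setNull(n, Types.DATE)
    // when the statement will be batch-executed.
    static void bindNullDate(PreparedStatement ps, int index) throws Exception {
        ps.setString(index, null); // not ps.setNull(index, java.sql.Types.DATE)
    }

    // Recording stub: implements PreparedStatement via a dynamic proxy and
    // logs each method name, so the sketch runs without a database.
    static PreparedStatement recordingStatement(List<String> calls) {
        return (PreparedStatement) Proxy.newProxyInstance(
                PreparedStatement.class.getClassLoader(),
                new Class<?>[] { PreparedStatement.class },
                (proxy, method, args) -> {
                    calls.add(method.getName());
                    return null; // fine for the void setters used here
                });
    }

    public static void main(String[] args) throws Exception {
        List<String> calls = new ArrayList<>();
        PreparedStatement ps = recordingStatement(calls);
        bindNullDate(ps, 6);
        System.out.println(calls); // [setString]
    }
}
```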

===========================================================================
Bug 4390125  JDBC update count is wrong with batching
 This note gives a brief overview of bug 4390125. 

Affects:
  Product (Component): JDBC (Oci)
  Range of versions believed to be affected: Versions < 11
  Versions confirmed as being affected: 9.2.0.6, 10.1.0.4, 10.2.0.1
  Platforms affected: Generic (all / most platforms affected)

Fixed:
  This issue is fixed in:
    9.2.0.8 (Server Patch Set)
    10.1.0.5 (Server Patch Set)
    10.2.0.2 (Server Patch Set)
    11g (Future version)

Symptoms: Wrong Results
Related To: JDBC

Description
When using batching in JDBC clients, the update count may be wrong if a
bind type is changed mid-batch.

eg:
  ExecuteBatchSize (3)
  Insert (11, "test11")
  Insert (22, new StringReader("test22"), 10)
  sendBatch
  ^
  The update count may show as 1

Workaround: 
  Do not change types during Batching.
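The workaround can be sketched as binding every row of the batch with the same setter, so the driver never sees a bind type change mid-batch. The recording stub is hypothetical scaffolding (a stdlib dynamic proxy), used only so the sketch runs without a database:

```java
import java.lang.reflect.Proxy;
import java.sql.PreparedStatement;
import java.util.ArrayList;
import java.util.List;

public class ConsistentBindTypes {

    // Workaround sketch: bind both rows with setString instead of mixing
    // setString for one row and setCharacterStream for the next, so the
    // bind type stays stable across the whole batch.
    static void addRows(PreparedStatement ps) throws Exception {
        ps.setInt(1, 11);
        ps.setString(2, "test11");
        ps.addBatch();
        ps.setInt(1, 22);
        ps.setString(2, "test22"); // not a StringReader: keep the type stable
        ps.addBatch();
    }

    // Recording stub so the sketch runs without a real connection.
    static PreparedStatement recordingStatement(List<String> calls) {
        return (PreparedStatement) Proxy.newProxyInstance(
                PreparedStatement.class.getClassLoader(),
                new Class<?>[] { PreparedStatement.class },
                (proxy, method, args) -> {
                    calls.add(method.getName());
                    return null;
                });
    }

    public static void main(String[] args) throws Exception {
        List<String> calls = new ArrayList<>();
        addRows(recordingStatement(calls));
        System.out.println(calls);
        // [setInt, setString, addBatch, setInt, setString, addBatch]
    }
}
```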
===========================================================================


Subject:  Bug 4183195 - Calling commit during "standard batching"
incorrectly discards pending batches 
Doc ID: Note:4183195.8
Type: PATCH
Last Revision Date: 03-OCT-2005
Status: PUBLISHED

Bug 4183195  Calling commit during "standard batching" incorrectly
discards pending batches
 This note gives a brief overview of bug 4183195. 

Affects:
  Product (Component): JDBC (JDBC for Java)
  Range of versions believed to be affected: Versions < 10.2
  Versions confirmed as being affected: 10.1.0.4
  Platforms affected: Generic (all / most platforms affected)

Fixed:
  This issue is fixed in:
    10.1.0.5 (Server Patch Set)
    10.2.0.1 (Base Release)

Symptoms: Corruption (Logical)
Related To: JDBC

Description
With standard JDBC batching, commit() incorrectly resets the batch
buffer, discarding any batched statements that have not yet been executed.

eg:
  Using standard JDBC-batch style, do an update and addBatch(), 
  call commit(). 
  Do a 2nd update and addBatch(), executeBatch(), and commit() again.
  ^
  Only the 2nd update appears in the database.

Workaround:
  Do not call commit() before calling executeBatch().
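A minimal sketch of the safe ordering: every addBatch() is flushed with executeBatch() before commit() is ever called, so no pending batch can be silently discarded. The `recorder` stub is hypothetical scaffolding (a stdlib dynamic proxy that logs method names) so the call ordering can be checked without a database:

```java
import java.lang.reflect.Proxy;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.util.ArrayList;
import java.util.List;

public class FlushBeforeCommit {

    // Safe ordering per the workaround: flush with executeBatch() first,
    // then commit(), for each unit of work.
    static void insertTwo(Connection conn, PreparedStatement ps) throws Exception {
        ps.setInt(1, 1);
        ps.addBatch();
        ps.executeBatch();   // flush the pending batch first ...
        conn.commit();       // ... then commit

        ps.setInt(1, 2);
        ps.addBatch();
        ps.executeBatch();
        conn.commit();
    }

    // Recording stub: logs each call on any JDBC interface so the
    // sketch runs without a real connection.
    static <T> T recorder(Class<T> iface, List<String> calls) {
        Object proxy = Proxy.newProxyInstance(
                iface.getClassLoader(),
                new Class<?>[] { iface },
                (p, method, args) -> {
                    calls.add(method.getName());
                    // executeBatch() declares int[]; everything else here is void
                    return method.getReturnType() == int[].class ? new int[0] : null;
                });
        return iface.cast(proxy);
    }

    public static void main(String[] args) throws Exception {
        List<String> calls = new ArrayList<>();
        insertTwo(recorder(Connection.class, calls),
                  recorder(PreparedStatement.class, calls));
        System.out.println(calls);
        // [setInt, addBatch, executeBatch, commit, setInt, addBatch, executeBatch, commit]
    }
}
```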


===========================================================================

Bug 4177549  Hang when executing a JDBC batch with large stream bind
 This note gives a brief overview of bug 4177549. 

Affects:
  Product (Component): JDBC (JDBC for Java)
  Range of versions believed to be affected: Versions < 10.2
  Versions confirmed as being affected: 9.2.0.6, 10.1.0.4
  Platforms affected: Generic (all / most platforms affected)

Fixed:
  This issue is fixed in:
    10.1.0.5 (Server Patch Set)
    10.2.0.1 (Base Release)

Symptoms: Hang (Process Hang)
Related To: JDBC

Description
Binding a smaller sized stream followed by larger sized stream
can lead to a hang in a JDBC client.

eg:
  Do JDBC batching in two iterations where the second bind is a larger
  stream than the first, e.g. a 2K stream followed by a 36K stream.
  Similar to (createClobData is a helper that returns a Reader over the
  given number of characters):
           String sql = "INSERT INTO test (aClob) VALUES (?)";
           stmt = conn.prepareStatement(sql);
           stmt.setCharacterStream(1, createClobData(3999, 'a'), 3999);
           stmt.addBatch();
           stmt.setCharacterStream(1, createClobData(4001, 'b'), 4001);
           stmt.addBatch();
           stmt.executeBatch();


Workaround: 
  Perform all binds of 32K or larger before any binds smaller than 32K.
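The workaround can be expressed as a small reordering step applied to the batch before binding. `orderBinds` is an illustrative helper (not a driver API) that performs a stable partition: every bind of 32K or more moves ahead of the smaller ones, with each group keeping its original relative order:

```java
import java.util.ArrayList;
import java.util.List;

public class BindOrdering {

    static final int STREAM_THRESHOLD = 32 * 1024; // 32K, per the bug note

    // Stable partition: all binds >= 32K first, then the smaller binds,
    // each group preserving its original relative order.
    static List<Integer> orderBinds(List<Integer> sizes) {
        List<Integer> ordered = new ArrayList<>();
        for (int s : sizes) if (s >= STREAM_THRESHOLD) ordered.add(s);
        for (int s : sizes) if (s < STREAM_THRESHOLD) ordered.add(s);
        return ordered;
    }

    public static void main(String[] args) {
        // A 2K stream followed by a 36K stream (the hang scenario in the
        // note) is reordered so the 36K bind goes first.
        System.out.println(orderBinds(List.of(2048, 36864))); // [36864, 2048]
    }
}
```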

===========================================================================

Subject:  Bug 4044730 - commit using oracle JDBC update batching may
cause data loss 
Doc ID: Note:4044730.8
Type: PATCH
Last Revision Date: 21-FEB-2006
Status: PUBLISHED

Bug 4044730  commit using oracle JDBC update batching may cause data
loss
 This note gives a brief overview of bug 4044730. 

Affects:
  Product (Component): JDBC (JDBC for Java)
  Range of versions believed to be affected: Versions < 10.2
  Versions confirmed as being affected: 10.1.0.4
  Platforms affected: Generic (all / most platforms affected)

Fixed:
  This issue is fixed in:
    9.2.0.8 (Server Patch Set)
    10.1.0.5 (Server Patch Set)
    10.2.0.1 (Base Release)

Symptoms: Corruption (Logical)
Related To: JDBC

Description
Commit using Oracle JDBC update batching may cause data loss or
report an error.


eg:
  Prepare a Statement for an INSERT SQL string with static binds.
  Set the execute batch size to 5.
  Do executeQuery 7 times.
  ^
  thin: Only 2 records inserted
  oci8: "invalid iteration count 0"

Workaround: 
  Do not use batching.
 or
  Use batching only when there are binds.
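The data-loss pattern in the note comes down to simple arithmetic: with an execute batch size of 5, seven executes leave 7 % 5 = 2 rows still buffered in the driver, and those buffered rows are what is at risk when commit() runs. `pendingRows` is an illustrative helper, not a driver API:

```java
public class PendingBatchRows {

    // With Oracle-style update batching, executes accumulate until the
    // batch size is reached; anything left over sits in the driver buffer
    // until the next flush.
    static int pendingRows(int executes, int batchSize) {
        return executes % batchSize;
    }

    public static void main(String[] args) {
        // Batch size 5, seven executes: one full batch of 5 is flushed
        // and 2 rows remain buffered, the rows lost in the bug scenario.
        System.out.println(pendingRows(7, 5)); // 2
    }
}
```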


-----Original Message-----
From: Marc Prud'hommeaux [mailto:mprudhomapache@gmail.com] On Behalf Of
Marc Prud'hommeaux
Sent: Tuesday, October 17, 2006 4:57 PM
To: open-jpa-dev@incubator.apache.org
Subject: Re: Why would Kodo refuse to batch inserts in certain tables?
Big performance drop migrating to Kodo 4.1

Alex-

Since the non-batched statement contains a Date field, this is due to  
the workaround for the Oracle JDBC driver bug I mentioned before.

Kodo will not batch statements that contain a Date field when
interacting with Oracle.

