db-derby-dev mailing list archives

From "Andreas Korneliussen (JIRA)" <derby-...@db.apache.org>
Subject [jira] Updated: (DERBY-802) OutofMemory Error when reading large blob when statement type is ResultSet.TYPE_SCROLL_INSENSITIVE
Date Thu, 06 Jul 2006 14:45:31 GMT
     [ http://issues.apache.org/jira/browse/DERBY-802?page=all ]

Andreas Korneliussen updated DERBY-802:
---------------------------------------

    Attachment: derby-802.diff
                derby-802.stat

Attached is a patch which should fix the OutOfMemory problem.

In ScrollInsensitiveResultSet.java and ProjectRestrictResultSet.java, the cloning has been removed.
When ScrollInsensitiveResultSet inserts rows into the BackingStoreHashTable, it leaves it
up to the BackingStoreHashTable to do the cloning. If a row is too big to fit in memory,
the BackingStoreHashTable will put it on disk.

BackingStoreHashTable also had to be fixed to avoid unnecessary in-memory cloning.
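
As a rough illustration of the new division of labour (a hedged sketch in plain Java, not
Derby's actual BackingStoreHashTable API; the class name, the DiskOverflow interface and
the byte budget are invented for the example):

import java.util.HashMap;
import java.util.Map;

// Sketch of the spill-to-disk pattern: the table owns the cloning decision,
// so callers no longer clone rows up front.
class SpillingHashTable {
    private final Map<Object, Object[]> inMemory = new HashMap<Object, Object[]>();
    private final DiskOverflow overflow;   // stand-in for a disk-backed store
    private long remainingBudget;          // bytes we may still keep in memory

    SpillingHashTable(long maxInMemoryBytes, DiskOverflow overflow) {
        this.remainingBudget = maxInMemoryBytes;
        this.overflow = overflow;
    }

    void putRow(Object key, Object[] row, long estimatedMemoryUsage) {
        if (estimatedMemoryUsage <= remainingBudget) {
            // Clone once, here, and only when the row actually stays in memory.
            inMemory.put(key, row.clone());
            remainingBudget -= estimatedMemoryUsage;
        } else {
            // Too big for the in-memory budget: send it to disk instead of cloning.
            overflow.write(key, row);
        }
    }

    interface DiskOverflow {
        void write(Object key, Object[] row);
    }
}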

The changes in SQLBinary and its subclasses make the method estimateMemoryUsage() return
a number that is at least as big as the memory the column will actually use. Before
this fix, the estimated memory usage for a 64MB blob was only a few bytes (~50 bytes).
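
To make the intent concrete (a hedged sketch, not the actual SQLBinary source;
BASE_MEMORY_USAGE and the field name are invented for the example):

// The estimate must cover the materialized value, not just object overhead.
abstract class BinaryValueSketch {
    private byte[] dataValue;  // the materialized blob bytes, may be null

    private static final int BASE_MEMORY_USAGE = 50;  // rough object overhead

    public int estimateMemoryUsage() {
        int sz = BASE_MEMORY_USAGE;
        if (dataValue != null) {
            // Without this term a 64MB blob was estimated at ~50 bytes, so
            // BackingStoreHashTable never chose to spill it to disk.
            sz += dataValue.length;
        }
        return sz;
    }
}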

Finally, I found that when storing a SQLBinary (SQLBlob) to a conglomerate, the ExecRow which
contains the column cannot be reused when backing the row to another conglomerate (if
the row spans multiple pages, I think). This is exactly what happens in ScrollInsensitiveResultSet.updateRow(..).
To get around this problem, I had to reassign some of the values in the updateRow(..) method
to refer to the data backed to the BackingStoreHashTable.
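
In outline, the workaround looks like this (a hedged sketch; the BackingStore interface
and method names are illustrative, not Derby's internals):

class UpdateRowSketch {
    interface BackingStore {
        void insert(Object key, Object[] row);
        Object[] fetch(Object key);  // returns the row as actually stored
    }

    static void updateRow(Object key, Object[] row, BackingStore store) {
        store.insert(key, row);
        // Repoint the caller's columns at the copies held by the backing
        // store: once a large column has been written across multiple pages,
        // the original objects cannot safely be stored into another
        // conglomerate again.
        Object[] stored = store.fetch(key);
        System.arraycopy(stored, 0, row, 0, row.length);
    }
}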


> OutofMemory Error when reading large blob when statement type is ResultSet.TYPE_SCROLL_INSENSITIVE
> --------------------------------------------------------------------------------------------------
>
>          Key: DERBY-802
>          URL: http://issues.apache.org/jira/browse/DERBY-802
>      Project: Derby
>         Type: Bug
>   Components: JDBC
>     Versions: 10.0.2.0, 10.0.2.1, 10.0.2.2, 10.1.1.0, 10.2.0.0, 10.1.2.0, 10.1.1.1, 10.1.1.2, 10.1.2.1, 10.1.3.0, 10.1.2.2
>  Environment: all
>     Reporter: Sunitha Kambhampati
>     Assignee: Andreas Korneliussen
>     Priority: Minor
>  Attachments: derby-802.diff, derby-802.stat
>
> Grégoire Dubois on the list reported this problem. From his mail: the reproduction is attached below.
> When the statement type is set to ResultSet.TYPE_SCROLL_INSENSITIVE, an OutOfMemoryError is thrown when reading large blobs.
> import java.sql.*;
> import java.io.*;
> /**
> *
> * @author greg
> */
> public class derby_filewrite_fileread {
>    
>     private static File file = new File("/mnt/BigDisk/Clips/BabyMamaDrama-JShin.wmv");
>     private static File destinationFile = new File("/home/greg/DerbyDatabase/"+file.getName());
>    
>     /** Creates a new instance of derby_filewrite_fileread */
>     public derby_filewrite_fileread() {       
>        
>     }
>    
>     public static void main(String args[]) {
>         try {
>             Class.forName("org.apache.derby.jdbc.EmbeddedDriver").newInstance();
>             Connection connection = DriverManager.getConnection("jdbc:derby:/home/greg/DerbyDatabase/BigFileTestDB;create=true", "APP", "");
>             connection.setAutoCommit(false);
>            
>             Statement statement = connection.createStatement(ResultSet.TYPE_SCROLL_INSENSITIVE, ResultSet.CONCUR_READ_ONLY);
>             ResultSet result = statement.executeQuery("SELECT TABLENAME FROM SYS.SYSTABLES");
>            
>             // Create table if it doesn't already exist.
>             boolean exist=false;
>             while ( result.next() ) {
>                 if ("db_file".equalsIgnoreCase(result.getString(1)))
>                     exist=true;
>             }
>             if ( !exist ) {
>                 System.out.println("Create table db_file.");
>                 statement.execute("CREATE TABLE db_file ("+
>                                            "     name          VARCHAR(40),"+
>                                            "     file          BLOB(2G) NOT NULL)");
>                 connection.commit();
>             }
>            
>             // Read file from disk, write on DB.
>             System.out.println("1 - Read file from disk, write on DB.");
>             PreparedStatement preparedStatement=connection.prepareStatement("INSERT INTO db_file(name,file) VALUES (?,?)");
>             FileInputStream fileInputStream = new FileInputStream(file);
>             preparedStatement.setString(1, file.getName());
>             preparedStatement.setBinaryStream(2, fileInputStream, (int)file.length());
>             preparedStatement.execute();
>             connection.commit();
>             System.out.println("2 - END OF Read file from disk, write on DB.");
>            
>            
>             // Read file from DB, and write on disk.
>             System.out.println("3 - Read file from DB, and write on disk.");
>             result = statement.executeQuery("SELECT file FROM db_file WHERE name='"+file.getName()+"'");
>             byte[] buffer = new byte [1024];
>             result.next();
>             BufferedInputStream     inputStream=new BufferedInputStream(result.getBinaryStream(1),1024);
>             FileOutputStream outputStream = new FileOutputStream(destinationFile);
>             int readBytes = 0;
>             while (readBytes!=-1) {
>                 readBytes=inputStream.read(buffer,0,buffer.length);
>                 if ( readBytes != -1 )
>                     outputStream.write(buffer, 0, readBytes);
>             }     
>             inputStream.close();
>             outputStream.close();
>             System.out.println("4 - END OF Read file from DB, and write on disk.");
>         }
>         catch (Exception e) {
>             e.printStackTrace(System.err);
>         }
>     }
> }
> It returns
> 1 - Read file from disk, write on DB.
> 2 - END OF Read file from disk, write on DB.
> 3 - Read file from DB, and write on disk.
> java.lang.OutOfMemoryError
> if the file is ~10MB or more

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
   http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see:
   http://www.atlassian.com/software/jira

