jackrabbit-dev mailing list archives

From "Christopher Elkins (JIRA)" <j...@apache.org>
Subject [jira] Reopened: (JCR-2892) Large fetch sizes have potentially deleterious effects on VM memory requirements when using Oracle
Date Wed, 16 Feb 2011 17:16:24 GMT

     [ https://issues.apache.org/jira/browse/JCR-2892?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Christopher Elkins reopened JCR-2892:
-------------------------------------


I do have a reproducible case in my application, but it is not something I can easily reduce
to a test case suitable for attachment here.

The actual number of rows returned is irrelevant. From the Oracle memory-management PDF cited in the issue description below:

"Since the buffers are allocated when the SQL is parsed, the size of the buffers depends not
on the actual size of the row data returned by the query, but on the maximum size possible
for the row data. After the SQL is parsed, the type of every column is known and from that
information the driver can compute the maximum amount of memory required to store each column.
The driver also has the fetchSize, the number of rows to retrieve on each fetch. With the
size of each column and the number of rows, the driver can compute the absolute maximum size
of the data returned in a single fetch. That is the size of the buffers."
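
To make that concrete, here is a minimal JDBC sketch against the BUNDLE table from the issue description below. The class and method names are mine, and the buffer arithmetic in the comments just restates the PDF's model, not anything reported by the driver:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    public class FetchSizeSketch {

        // With the Oracle 10g+ thin driver, fetch buffers are allocated
        // when the SQL is parsed, sized from the declared column types
        // times the fetch size -- not from the rows actually returned.
        static void readBundle(Connection con, byte[] nodeId) throws SQLException {
            PreparedStatement stmt = con.prepareStatement(
                "select NODE_ID, BUNDLE_DATA from BUNDLE where NODE_ID = ?");
            try {
                // The value hard-coded by r1060431. Per the quoted PDF,
                // the driver now reserves about (16 + 4096) * 10000 =
                // 41,120,000 bytes (~40 MB) for NODE_ID raw(16) plus the
                // BLOB column, even if the query matches exactly one row.
                stmt.setFetchSize(10000);
                stmt.setBytes(1, nodeId);
                ResultSet rs = stmt.executeQuery();
                try {
                    while (rs.next()) {
                        // process the bundle row ...
                    }
                } finally {
                    rs.close();
                }
            } finally {
                stmt.close();
            }
        }
    }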

The specific context is the same as that of JCR-2832: adding a new node to a cluster. However, with the current fetch size of 10,000, any statement that returns a BLOB column is going to create large buffers.

> Large fetch sizes have potentially deleterious effects on VM memory requirements when using Oracle
> --------------------------------------------------------------------------------------------------
>
>                 Key: JCR-2892
>                 URL: https://issues.apache.org/jira/browse/JCR-2892
>             Project: Jackrabbit Content Repository
>          Issue Type: Bug
>          Components: jackrabbit-core, sql
>    Affects Versions: 2.2.2
>         Environment: Oracle 10g+
>            Reporter: Christopher Elkins
>
> Since Release 10g, Oracle JDBC drivers use the fetch size to allocate buffers for caching row data.
> cf. http://www.oracle.com/technetwork/database/enterprise-edition/memory.pdf
> r1060431 hard-codes the fetch size for all ResultSet-returning statements to 10,000.
> This value has significant, potentially deleterious, effects on the heap space required
> for even moderately-sized repositories. For example, the BUNDLE table (from 'oracle.ddl')
> has two columns -- NODE_ID raw(16) and BUNDLE_DATA blob -- which require 16 bytes and
> 4 KB of buffer space, respectively. This requires a buffer of more than 40 MB
> [(16 + 4096) * 10,000 = 41,120,000 bytes].
> If the issue described in JCR-2832 is truly specific to PostgreSQL, I think its resolution
> should be moved to a PostgreSQL-specific ConnectionHelper subclass. Failing that, there
> should be a way to override this hard-coded value in OracleConnectionHelper.
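
For what it's worth, the suggested override could be as small as a fetch-size hook on the helper. The following is only a hypothetical sketch; the getFetchSize() hook and the constructor shape are illustrative, not the actual ConnectionHelper API:

    import javax.sql.DataSource;

    // Hypothetical sketch only. It assumes ConnectionHelper exposes a
    // protected getFetchSize() hook that its statement-execution path
    // consults before each query; no such hook exists today, and the
    // super(...) call shown is likewise illustrative.
    public class OracleConnectionHelper extends ConnectionHelper {

        private final int fetchSize;

        public OracleConnectionHelper(DataSource dataSource, int fetchSize) {
            super(dataSource, true);
            this.fetchSize = fetchSize;
        }

        @Override
        protected int getFetchSize() {
            return fetchSize; // e.g. 32 for BLOB-heavy tables instead of 10,000
        }
    }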

-- 
This message is automatically generated by JIRA.
-
For more information on JIRA, see: http://www.atlassian.com/software/jira
