[ https://issues.apache.org/jira/browse/JCR-2892?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Jukka Zitting updated JCR-2892:
-------------------------------
Fix Version/s: (was: 2.3.1)
> Large fetch sizes have potentially deleterious effects on VM memory requirements when using Oracle
> --------------------------------------------------------------------------------------------------
>
> Key: JCR-2892
> URL: https://issues.apache.org/jira/browse/JCR-2892
> Project: Jackrabbit Content Repository
> Issue Type: Bug
> Components: jackrabbit-core, sql
> Affects Versions: 2.2.2
> Environment: Oracle 10g+
> Reporter: Christopher Elkins
> Assignee: Claus Köll
> Fix For: 2.2.10
>
> Attachments: JCR-2892.patch, oracleFetchSize.patch
>
>
> Since Release 10g, Oracle JDBC drivers use the fetch size to allocate buffers for caching row data.
> cf. http://www.oracle.com/technetwork/database/enterprise-edition/memory.pdf
> r1060431 hard-codes the fetch size for all ResultSet-returning statements to 10,000. This value has significant, potentially deleterious, effects on the heap space required for even moderately-sized repositories. For example, the BUNDLE table (from 'oracle.ddl') has two columns -- NODE_ID raw(16) and BUNDLE_DATA blob -- which require 16 bytes and 4 KB of buffer space, respectively. At that fetch size this amounts to a buffer of more than 40 MB [(16 + 4096) * 10000 = 41,120,000 bytes].
> If the issue described in JCR-2832 is truly specific to PostgreSQL, I think its resolution should be moved to a PostgreSQL-specific ConnectionHelper subclass. Failing that, there should be a way to override this hard-coded value in OracleConnectionHelper.
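> As a rough illustration of the kind of override being requested (a hand-written sketch, not Jackrabbit's actual ConnectionHelper API; the class name, system property, connection URL, credentials, and table name below are hypothetical), the fetch size could be read from configuration and applied per statement instead of being hard-coded:
>
>     import java.sql.Connection;
>     import java.sql.DriverManager;
>     import java.sql.PreparedStatement;
>     import java.sql.ResultSet;
>
>     // Hypothetical sketch: read the fetch size from a system property instead of
>     // hard-coding 10,000, so Oracle deployments can bound per-statement buffers.
>     public class FetchSizeExample {
>
>         // A small default keeps the Oracle row-prefetch buffer modest; at
>         // roughly 4 KB per BUNDLE_DATA row, 100 rows is on the order of 400 KB.
>         private static final int DEFAULT_FETCH_SIZE = 100;
>
>         public static void main(String[] args) throws Exception {
>             int fetchSize = Integer.getInteger("bundle.fetchSize", DEFAULT_FETCH_SIZE);
>
>             try (Connection con = DriverManager.getConnection(
>                      "jdbc:oracle:thin:@//localhost:1521/XE", "user", "pass");
>                  PreparedStatement stmt = con.prepareStatement(
>                      "select NODE_ID, BUNDLE_DATA from DEFAULT_BUNDLE")) {
>
>                 // The Oracle driver allocates its row-caching buffers from this value,
>                 // so lowering it directly lowers the heap required per open ResultSet.
>                 stmt.setFetchSize(fetchSize);
>
>                 try (ResultSet rs = stmt.executeQuery()) {
>                     while (rs.next()) {
>                         byte[] nodeId = rs.getBytes(1);
>                         // process the bundle row ...
>                     }
>                 }
>             }
>         }
>     }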
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira