db-derby-dev mailing list archives

From "Knut Anders Hatlen (JIRA)" <j...@apache.org>
Subject [jira] Commented: (DERBY-3734) Maximum value allowed for derby.storage.fileCacheSize (100) is too low for large system. Increase the maximum value and redocument the property.
Date Fri, 20 Jun 2008 23:25:44 GMT

    [ https://issues.apache.org/jira/browse/DERBY-3734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12606931#action_12606931 ]

Knut Anders Hatlen commented on DERBY-3734:
-------------------------------------------

+1

I don't think there's any need to limit the maximum value for this setting. On some systems,
you may exceed the maximum allowed number of open files if you increase the size of the container
cache too much, but that's not something we need to worry about if we leave the default as
it is today.
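For context on how the setting under discussion is applied: derby.storage.fileCacheSize must be in effect before the Derby engine boots, e.g. via a JVM system property or a derby.properties entry. A minimal sketch (the value 200 is the figure from the report quoted below, not a recommended default; the class name is illustrative):

```java
public class FileCacheSizeDemo {
    public static void main(String[] args) {
        // Must be set before the Derby engine boots. derby.storage.fileCacheSize
        // caps the number of open container files Derby keeps in its file cache;
        // the issue below is about the hard-coded upper bound on this value.
        System.setProperty("derby.storage.fileCacheSize", "200");
        System.out.println(System.getProperty("derby.storage.fileCacheSize")); // prints: 200
    }
}
```

The same setting can be placed in the database system's derby.properties file as `derby.storage.fileCacheSize=200`, which avoids touching JVM startup code.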

> Maximum value allowed for derby.storage.fileCacheSize (100) is too low for large system.  Increase the maximum value and redocument the property.
> -------------------------------------------------------------------------------------------------------------------------------------------------
>
>                 Key: DERBY-3734
>                 URL: https://issues.apache.org/jira/browse/DERBY-3734
>             Project: Derby
>          Issue Type: Bug
>          Components: Performance
>    Affects Versions: 10.3.3.0
>         Environment: Derby 10.3.3
>            Reporter: Stan Bradbury
>
> Increasing the value of the undocumented property derby.storage.fileCacheSize improves
Derby performance in our system, but the maximum allowed value (100) is not large enough to
accommodate our system.  Our performance engineer reports:
> The following stack shows items being evicted from Derby's container cache.  Extra debug
code showed that the cache was at its maximum size (100), and that about 1 in 25 accesses
to the cache resulted in a miss (forcing another item to be evicted).  Since each miss results
in a synchronous disk write, performance is bottlenecked on IO.  Patching Derby to allow the
cache to grow to 200 entries solved the performance problem.
> Performance is severely degraded.  CPU utilization is low -- performance is IO bound.
 A sampling of stack dumps for the key thread consistently has the following methods at the
top of the stack:
> at sun/nio/ch/FileChannelImpl.force0(Native Method)
> at sun/nio/ch/FileChannelImpl.force(FileChannelImpl.java:392(Compiled Code))
> at org/apache/derby/impl/io/DirRandomAccessFile4.sync(Bytecode PC:5(Compiled Code))
> at org/apache/derby/impl/store/raw/data/RAFContainer.writeRAFHeader(Bytecode PC:86(Compiled Code))
> at org/apache/derby/impl/store/raw/data/RAFContainer.clean(Bytecode PC:84(Compiled Code))
> at org/apache/derby/impl/services/cache/CachedItem.clean(Bytecode PC:7(Compiled Code))
> at org/apache/derby/impl/services/cache/Clock.rotateClock(Bytecode PC:7(Compiled Code))
> at org/apache/derby/impl/services/cache/Clock.findFreeItem(Bytecode PC:17(Compiled Code))
> at org/apache/derby/impl/services/cache/Clock.find(Bytecode PC:71(Compiled Code))
> at org/apache/derby/impl/store/raw/data/BaseDataFileFactory.openContainer(Bytecode PC:65(Compiled Code))
> at org/apache/derby/impl/store/raw/data/BaseDataFileFactory.openContainer(Bytecode PC:7(Compiled Code))
> at org/apache/derby/impl/store/raw/xact/Xact.openContainer(Bytecode PC:29(Compiled Code))

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

