zookeeper-dev mailing list archives

From "Rakesh R (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (ZOOKEEPER-3180) Add response cache to improve the throughput of read heavy traffic
Date Mon, 26 Nov 2018 11:45:00 GMT

    [ https://issues.apache.org/jira/browse/ZOOKEEPER-3180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16698826#comment-16698826 ]

Rakesh R commented on ZOOKEEPER-3180:
-------------------------------------

[~lvfangmin], the feature looks pretty interesting, and thanks for the patch. I will try to review
it when I get a chance.

I'd like to understand how this cache scales in a production environment. I'd appreciate it if you
could add more details about the memory usage:
 # What is the Java heap size of the ZooKeeper server, and what is the expected number of znodes (I am assuming 4 MB of data per znode)?
 # Based on your experiments, what would be the ideal number of elements to keep in the cache for a good read-performance gain? (A rough sizing estimate follows below.)
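
As a rough back-of-envelope sketch, assuming the 4 MB-per-znode figure above: a cache holding
1,000 responses would pin about 1,000 x 4 MB = 4 GB of payload on the heap, before counting
serialization buffers and JVM object overhead, so the entry limit looks like the key tuning knob.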

Since you mentioned that the intention of the read cache feature is to reduce GC overhead,
I would also like to explore the option of backing the cache with {{off heap}} memory. With
{{4MB}} of data in a single znode, the cache would occupy a good amount of {{on heap}} memory
as the number of znodes grows. An off-heap implementation could use {{DirectByteBuffers}} to
manage the cache outside of the JVM heap and scale to large memory sizes without GC overhead.
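
To make that concrete, here is a minimal sketch (not the actual patch; the class and method
names are illustrative only) of how serialized responses could be held in direct ByteBuffers
outside the heap:

{code:java}
import java.nio.ByteBuffer;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/**
 * Hypothetical off-heap store for serialized responses. Each value is
 * copied into a direct ByteBuffer, so the payload lives in native memory
 * and is invisible to the garbage collector.
 */
public class OffHeapResponseStore {
    private final Map<String, ByteBuffer> cache = new ConcurrentHashMap<>();

    public void put(String path, byte[] serializedResponse) {
        // allocateDirect places the bytes in native memory, not on the heap
        ByteBuffer buf = ByteBuffer.allocateDirect(serializedResponse.length);
        buf.put(serializedResponse);
        buf.flip();
        cache.put(path, buf);
    }

    public byte[] get(String path) {
        ByteBuffer buf = cache.get(path);
        if (buf == null) {
            return null;
        }
        // copy back onto the heap only when the response is actually served
        byte[] copy = new byte[buf.remaining()];
        buf.duplicate().get(copy);
        return copy;
    }
}
{code}

Note that direct buffers are bounded by {{-XX:MaxDirectMemorySize}} and their native memory is
only released once the wrapping ByteBuffer object is collected, so an off-heap mode would still
need an explicit size limit and eviction policy.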

One option could be to offer users different cache modes ({{onheap}} or {{offheap}}) so that the
heap area is used efficiently. Does this make sense to you?
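
As a rough illustration of what that mode switch might look like (the property name
{{zookeeper.responseCacheMode}} and the size limit below are placeholders, not existing
ZooKeeper settings):

{code:java}
import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.Map;

/**
 * Sketch of selecting a response cache mode at startup. The property name
 * "zookeeper.responseCacheMode" is illustrative only.
 */
public class ResponseCacheConfig {

    static final int MAX_ENTRIES = 400; // example limit; would be configurable

    /** Access-ordered LRU map that evicts the least recently used entry
     *  once the size limit is exceeded. */
    static Map<String, byte[]> newOnHeapLruCache() {
        LinkedHashMap<String, byte[]> lru = new LinkedHashMap<String, byte[]>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<String, byte[]> eldest) {
                return size() > MAX_ENTRIES;
            }
        };
        return Collections.synchronizedMap(lru);
    }

    public static void main(String[] args) {
        String mode = System.getProperty("zookeeper.responseCacheMode", "onheap");
        if ("offheap".equals(mode)) {
            System.out.println("offheap mode would delegate to the DirectByteBuffer store sketched above");
        } else {
            Map<String, byte[]> cache = newOnHeapLruCache();
            cache.put("/app/config", new byte[4 * 1024 * 1024]); // e.g. a 4 MB response
            System.out.println("onheap LRU cache holding " + cache.size() + " entry");
        }
    }
}
{code}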

> Add response cache to improve the throughput of read heavy traffic 
> -------------------------------------------------------------------
>
>                 Key: ZOOKEEPER-3180
>                 URL: https://issues.apache.org/jira/browse/ZOOKEEPER-3180
>             Project: ZooKeeper
>          Issue Type: Improvement
>          Components: server
>            Reporter: Fangmin Lv
>            Assignee: Brian Nixon
>            Priority: Minor
>              Labels: pull-request-available
>             Fix For: 3.6.0
>
>          Time Spent: 3h
>  Remaining Estimate: 0h
>
> In read-heavy use cases with large response data sizes, serializing responses takes time and adds overhead to the GC.
> Adding a response cache helps improve the throughput we can support, and also reduces latency in general.
> This Jira will implement an LRU cache for responses, which has shown some performance gain on some of our production ensembles.



