zookeeper-dev mailing list archives

From "Michael Han (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (ZOOKEEPER-3180) Add response cache to improve the throughput of read heavy traffic
Date Thu, 13 Dec 2018 04:57:00 GMT

    [ https://issues.apache.org/jira/browse/ZOOKEEPER-3180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16719752#comment-16719752 ]

Michael Han commented on ZOOKEEPER-3180:
----------------------------------------

My experience with JVM GC and ZooKeeper is that GC is rarely a real issue in production if tuned
correctly (I ran a fairly large ZK fleet that pushed ZK close to its limits). Most GC issues I had
came from software bugs, such as leaking connections. For this cache, the current implementation
is good enough for my use case, though I am interested in off-heap solutions as well.
My concern with an off-heap solution is that it is probably going to be more complicated, and it
carries the overhead of serialization / deserialization between heap and off-heap memory. I'd say
we get this patch landed, have more people test it out, and then improve it with more options.

 

And for caching in general, it obviously depends a lot on the workload and the actual use case, so
it's hard to provide a cache solution that works for everyone in the first place...
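
For anyone following along, here is a minimal sketch of the kind of on-heap LRU response cache
being discussed: serialized read responses keyed by path and invalidated against the node's
modification zxid. The class name, the eviction size, and the invalidation key are my own
assumptions for illustration; this is not the actual patch.

// Hypothetical sketch of an on-heap LRU response cache, assuming invalidation
// by mzxid. Names and defaults are illustrative, not the ZOOKEEPER-3180 code.
import java.util.LinkedHashMap;
import java.util.Map;

public class ResponseCacheSketch {
    private static final int MAX_ENTRIES = 400; // assumed default, tune per workload

    // Cached value: the serialized response bytes plus the mzxid they were built from.
    private static final class Entry {
        final long mzxid;
        final byte[] serialized;
        Entry(long mzxid, byte[] serialized) {
            this.mzxid = mzxid;
            this.serialized = serialized;
        }
    }

    // An access-ordered LinkedHashMap gives simple LRU eviction.
    private final Map<String, Entry> cache =
        new LinkedHashMap<String, Entry>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<String, Entry> eldest) {
                return size() > MAX_ENTRIES;
            }
        };

    // Return cached bytes only if they match the node's current mzxid, else null
    // so the caller serializes the response again and refreshes the cache.
    public synchronized byte[] get(String path, long currentMzxid) {
        Entry e = cache.get(path);
        return (e != null && e.mzxid == currentMzxid) ? e.serialized : null;
    }

    // Store freshly serialized response bytes so later identical reads skip serialization.
    public synchronized void put(String path, long currentMzxid, byte[] serialized) {
        cache.put(path, new Entry(currentMzxid, serialized));
    }
}

The payoff is that repeated reads of a hot znode reuse the same byte[] instead of re-serializing
on every request, which is where the throughput and GC savings in the Jira description come from.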

> Add response cache to improve the throughput of read heavy traffic 
> -------------------------------------------------------------------
>
>                 Key: ZOOKEEPER-3180
>                 URL: https://issues.apache.org/jira/browse/ZOOKEEPER-3180
>             Project: ZooKeeper
>          Issue Type: Improvement
>          Components: server
>            Reporter: Fangmin Lv
>            Assignee: Brian Nixon
>            Priority: Minor
>              Labels: pull-request-available
>             Fix For: 3.6.0
>
>          Time Spent: 3h 50m
>  Remaining Estimate: 0h
>
> On read-heavy use cases with large response data sizes, serializing the response takes
time and adds overhead to the GC.
> Adding a response cache helps improve the throughput we can support, and it also reduces
latency in general.
> This Jira will implement an LRU cache for responses, which has shown some performance
gain on some of our production ensembles.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
