hadoop-common-commits mailing list archives

From Apache Wiki <wikidi...@apache.org>
Subject [Hadoop Wiki] Update of "ZooKeeper/ServiceLatencyOverview" by PatrickHunt
Date Wed, 28 Oct 2009 16:32:42 GMT
Dear Wiki user,

You have subscribed to a wiki page or wiki category on "Hadoop Wiki" for change notification.

The "ZooKeeper/ServiceLatencyOverview" page has been changed by PatrickHunt.
http://wiki.apache.org/hadoop/ZooKeeper/ServiceLatencyOverview?action=diff&rev1=27&rev2=28

--------------------------------------------------

    * user	0m1.284s
    * sys	0m0.214s
  
- During the tests the snapshot (the on-disk copy of the znode data) reached approximately 100 MB
in size (as mentioned previously, the JVM heap is limited to 512 MB). By default, transaction
log files are pre-allocated to ~60 MB, and additional snapshots and logs are written as needed.
Depending on your workload, the frequency of your snap/log cleanup, and your cluster configuration,
you should allow sufficient storage space. For example, running a 20 client test required approximately
5 GB of disk storage (the tests were not cleaning up old snaps/logs, so this accumulated
over the test lifetime).
- 
  === Operating System ===
  
  Linux version 2.6.18-53.1.13.el5 compiled using gcc version 4.1.2 20070626 (Red Hat 4.1.2-14)
@@ -110, +108 @@

  It would be interesting to test other workloads. Here I've weighted things toward a balanced
write/read workload. In production deployments we typically see a heavily read-dominant workload,
in which case the service's performance should be even better than what we are seeing here.
  
  Additionally, it might be interesting to try tuning the ZK configuration parameters, for
example examining the effects of the leaderServes and forceSync options (to name just two) on
overall performance and latency.
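
As a rough illustration of such a tuning run (not part of the original tests), both leaderServes and forceSync are exposed as Java system properties (zookeeper.leaderServes and zookeeper.forceSync), so an experiment might launch the server with flags along these lines. The classpath and config file names here are placeholders, not the actual test setup:

```
# hypothetical tuning run; classpath/config names are illustrative only
java -Dzookeeper.leaderServes=no \
     -Dzookeeper.forceSync=no \
     -cp zookeeper.jar:conf \
     org.apache.zookeeper.server.quorum.QuorumPeerMain zoo.cfg
```

Disabling forceSync trades durability for latency (the txn log is no longer fsynced before acks), so it is something to measure, not a recommendation.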
+ 
+ During the tests the snapshot (the on-disk copy of the znode data) reached approximately 100 MB
in size (as mentioned previously, the JVM heap is limited to 512 MB). By default, transaction
log files are pre-allocated to ~60 MB, and additional snapshots and logs are written as needed.
Depending on your workload, the frequency of your snap/log cleanup, and your cluster configuration,
you should allow sufficient storage space. For example, running a 20 client test required approximately
5 GB of disk storage (the tests were not cleaning up old snaps/logs, so this accumulated
over the test lifetime).
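
Since the tests let snaps/logs accumulate, a periodic retention pass along the lines of ZooKeeper's PurgeTxnLog utility would bound the disk usage. A minimal sketch in Python, illustrative only: it assumes the standard snapshot.<hex zxid> / log.<hex zxid> file naming and only roughly mimics the real utility's retention rule (PurgeTxnLog also keeps the newest log that precedes the oldest retained snapshot, which this sketch omits):

```python
import os

def purge_old_files(data_dir, retain_count=3):
    """Keep the `retain_count` newest snapshots, delete older snapshots
    and any txn logs older than the oldest retained snapshot.
    Rough sketch of ZooKeeper's PurgeTxnLog behavior, not a replacement."""
    def zxid(name):
        # files are named snapshot.<hex zxid> / log.<hex zxid>
        return int(name.split('.')[-1], 16)

    snaps = sorted((f for f in os.listdir(data_dir)
                    if f.startswith('snapshot.')), key=zxid)
    if len(snaps) <= retain_count:
        return []  # nothing to purge yet

    keep_from = zxid(snaps[-retain_count])  # oldest zxid we retain
    removed = []
    for f in os.listdir(data_dir):
        if (f.startswith('snapshot.') or f.startswith('log.')) \
                and zxid(f) < keep_from:
            os.remove(os.path.join(data_dir, f))
            removed.append(f)
    return removed
```

Run periodically (e.g. from cron) against the dataDir, this keeps the ~5 GB figure seen in the 20 client test from growing without bound.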
  
  -----
  
