hadoop-common-commits mailing list archives

From Apache Wiki <wikidi...@apache.org>
Subject [Hadoop Wiki] Update of "ZooKeeper/ServiceLatencyOverview" by PatrickHunt
Date Wed, 28 Oct 2009 16:22:08 GMT
Dear Wiki user,

You have subscribed to a wiki page or wiki category on "Hadoop Wiki" for change notification.

The "ZooKeeper/ServiceLatencyOverview" page has been changed by PatrickHunt.
http://wiki.apache.org/hadoop/ZooKeeper/ServiceLatencyOverview?action=diff&rev1=25&rev2=26

--------------------------------------------------

  
  All systems had dual quad-core Intel(R) Xeon(R) CPUs running at 2.50GHz. Eight cores were available; however, as noted below, Linux's processor affinity feature was used to limit CPU availability to the JVM. In the tests below I use 1, 2, or 4 cores from the first CPU (taskset 0x01, 0x03, or 0x0f respectively).
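
  As a concrete illustration of the affinity masks mentioned above (a minimal sketch; the pinned command is a placeholder, not the exact invocation used in these tests):

    # Each bit in the taskset mask enables one logical CPU:
    #   0x01 = core 0 only, 0x03 = cores 0-1, 0x0f = cores 0-3.
    taskset 0x01 <command>    # restrict to 1 core
    taskset 0x03 <command>    # restrict to 2 cores
    taskset 0x0f <command>    # restrict to 4 cores
    # Check the effective affinity mask of a running process (PID is a placeholder):
    taskset -p 1234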
  
- All systems had 16gig memory available; however, unless specifically noted, the JVM's -Xmx option was used to limit the size of the JVM heap to 512m.
+ All systems had 16gig ECC memory available; however, unless specifically noted, the JVM's -Xmx option was used to limit the size of the JVM heap to 512m.
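
  For reference, a hypothetical server launch combining the affinity mask and heap limit might look like the following; the classpath and config path are assumptions, not taken from the test setup:

    # Sketch only: pin the ZooKeeper server JVM to 4 cores and cap the heap at 512m.
    taskset 0x0f java -Xmx512m -cp 'zookeeper.jar:lib/*:conf' \
        org.apache.zookeeper.server.quorum.QuorumPeerMain conf/zoo.cfg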
  
- All systems had 7200RPM SATA drives. hdparm reported:
+ All systems had 7200RPM SATA drives
+  * hdparm -tT reported:
-  * Timing cached reads:   22980 MB in  1.99 seconds = 11532.40 MB/sec
+   * Timing cached reads:   22980 MB in  1.99 seconds = 11532.40 MB/sec
-  * Timing buffered disk reads:  266 MB in  3.01 seconds =  88.29 MB/sec
+   * Timing buffered disk reads:  266 MB in  3.01 seconds =  88.29 MB/sec
+  * time dd if=/dev/urandom bs=512000 of=/tmp/memtest count=1050
+   * 537600000 bytes (538 MB) copied, 73.9991 seconds, 7.3 MB/s
+   * real	1m14.001s
+   * user	0m0.000s
+   * sys	1m13.995s
+  * time md5sum /tmp/memtest; time md5sum /tmp/memtest; time md5sum /tmp/memtest
+   * real	0m1.498s
+   * user	0m1.284s
+   * sys	0m0.214s
  
  During the tests the snapshot (the on-disk copy of the znode data) reached approximately 100meg in size (as mentioned previously, the JVM heap is limited to 512 meg); by default, transactional log files are pre-allocated to ~60meg. Additional snapshots and logs are written as needed. Depending on your workload, the frequency of your snap/log cleanup, and your cluster configuration, you should allow sufficient storage space. For example, running a 20 client test required approximately 5 gig of disk storage. (The tests were not cleaning up old snaps/logs, so this accumulated over the test lifetime.)
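
  To keep an eye on accumulated snapshot/log usage during a long run, something like the following can be used (a sketch only; the dataDir path is a placeholder):

    # Total size of the ZooKeeper data directory (snapshots + transaction logs).
    du -sh /var/zookeeper/version-2
    # Newest files first, to see how quickly snaps/logs are being written.
    ls -lht /var/zookeeper/version-2 | head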
  
