From: Patrick Hunt
Date: Mon, 09 Nov 2009 21:47:15 -0800
To: zookeeper-user@hadoop.apache.org
Subject: Re: ZK on EC2

Interesting, so comparing a large EC2 instance (4 cores, "high" I/O
performance) against the host I used in the latency test (the first number
on each line below is EC2, the second is the latency-test host):

ebs cache        817 vs 11532   ~ 7%   (EC2 about 7% as performant)
ebs bufread       53 vs 88      ~ 60%
native cache     829 vs 11532   ~ 7%
native bufread    80 vs 88      ~ 90%
dd 512m         106s vs 74s     ~ 43% longer on the EC2 large instance
md5sum 512m    2.13s vs 1.5s    ~ 42% longer

Good thing we don't rely on disk cache. ;-) Raw processing power looks
about half.

Could you test networking, i.e. scp'ing data between hosts? (I was seeing
64.1 MB/s for a 512 MB file - the one created by dd, random data.)

Could anyone also try these on a small instance?

Patrick
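A rough sketch of that scp network test, assuming two instances that can
reach each other over ssh (host-b below is a placeholder) and reusing the
/tmp/memtest file from the dd run quoted below:

# copy the ~512 MB random file to a second instance and time it
time scp /tmp/memtest host-b:/tmp/memtest
# throughput in MB/s is roughly 512 / (real seconds reported by time);
# e.g. an 8 s copy works out to ~64 MB/s, in the ballpark of the
# 64.1 MB/s figure mentioned above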
Ted Dunning wrote:
> /dev/sdp is an EBS volume. /dev/sdb is a native volume.
>
> This is a large instance.
>
> root@domU-#:~# hdparm -tT /dev/sdp
>
> /dev/sdp:
>  Timing cached reads:   1634 MB in 2.00 seconds = 817.30 MB/sec
>  Timing buffered disk reads:  160 MB in 3.00 seconds = 53.27 MB/sec
>
> root@domU-:~# hdparm -tT /dev/sdb
>
> /dev/sdb:
>  Timing cached reads:   1658 MB in 2.00 seconds = 829.44 MB/sec
>  Timing buffered disk reads:  242 MB in 3.00 seconds = 80.56 MB/sec
>
> root@domU-:~# time dd if=/dev/urandom bs=512000 of=/tmp/memtest count=1050
> 1050+0 records in
> 1050+0 records out
> 537600000 bytes (538 MB) copied, 106.525 s, 5.0 MB/s
>
> real    1m46.517s
> user    0m0.000s
> sys     1m46.127s
>
> root@domU-:~# time md5sum /tmp/memtest; time md5sum /tmp/memtest; time md5sum /tmp/memtest
> f79304f68ce04011ca0aebfbd548134a  /tmp/memtest
>
> real    0m2.234s
> user    0m1.613s
> sys     0m0.590s
> f79304f68ce04011ca0aebfbd548134a  /tmp/memtest
>
> real    0m2.136s
> user    0m1.560s
> sys     0m0.584s
> f79304f68ce04011ca0aebfbd548134a  /tmp/memtest
>
> real    0m2.123s
> user    0m1.640s
> sys     0m0.481s
> root@domU-:~#
>
> On Mon, Nov 9, 2009 at 4:54 PM, Patrick Hunt wrote:
>
>> I'm really interested to know how EC2 compares wrt disk and network
>> performance to what I've documented here under the "hardware" section:
>> http://wiki.apache.org/hadoop/ZooKeeper/ServiceLatencyOverview#Hardware
>>
>> Is it possible for someone to compare the network and disk performance
>> (scp, dd, md5sum, etc.) that I document in the wiki page on, say, EC2
>> small/large nodes? I'd do it myself but I've not used EC2. If anyone
>> could try these and report back I'd appreciate it.
>>
>> Patrick
>>
>> Ted Dunning wrote:
>>
>>> Worked pretty well for me. We did extend all of our timeouts. The
>>> biggest worry for us was timeouts on the client side. The ZK server
>>> side was no problem in that respect.
>>>
>>> On Mon, Nov 9, 2009 at 4:20 PM, Jun Rao wrote:
>>>
>>>> Has anyone deployed ZK on EC2? What's the experience there? Are there
>>>> more timeouts, leader re-election, etc.? Thanks,
>>>>
>>>> Jun
>>>> IBM Almaden Research Center
>>>> K55/B1, 650 Harry Road, San Jose, CA 95120-6099
>>>> junrao@almaden.ibm.com
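For the "extend all of our timeouts" part Ted mentions, a minimal sketch of
the server-side knobs involved, using the standard zoo.cfg parameters; the
values and the scratch-file path are illustrative, not taken from this thread:

# illustrative timeout-related settings, written to a scratch file here;
# merge them into your real conf/zoo.cfg by hand
cat > /tmp/zoo.cfg.timeouts <<'EOF'
# client session timeouts are negotiated within roughly 2*tickTime to
# 20*tickTime, so a larger tick (the sample config ships with 2000 ms)
# lets clients ask for longer sessions
tickTime=6000
# ticks a follower may take to connect and sync to the leader
initLimit=10
# ticks a follower may fall behind before being dropped
syncLimit=5
EOF

Clients would then request a correspondingly larger session timeout when they
connect, which is the client-side piece Ted calls out as the bigger worry.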