Subject: Re: Cassandra benchmarking on Rackspace Cloud
From: malcolm smith
To: user@cassandra.apache.org
Date: Mon, 19 Jul 2010 14:52:22 -0400
Usually a fixed bottleneck results from a limited resource -- you've eliminated disk from the test, and you don't mention that CPU is a serious issue, or memory for that matter.

So for me that leaves network I/O and switch capacity. Is it possible that your test is saturating your local network card or switch infrastructure?

Some rough numbers: 1 GbE does about 120 MB/s in practice, and 100 Mbit does something like 10 MB/s. At 37,000 requests per second, that works out to about 270 bytes per request (including network encoding and metadata) on a 100 Mbit network, or about 3.2 KB per request if you have a full 1 GbE network, including switch capacity to switch 1 GbE per node.

Is it possible that you are moving 3.2 KB per request?

-malcolm

On Mon, Jul 19, 2010 at 2:27 PM, Ryan King wrote:
> On Mon, Jul 19, 2010 at 11:02 AM, David Schoonover wrote:
> >> Multiple client processes, or multiple client machines?
> >
> > I ran it with both one and two client machines making requests, and
> > ensured the sum of the request threads across the clients was 50. That was
> > on the cloud. I am re-running the multi-host test against the 4-node cluster
> > on dedicated hardware now to ensure that result was not an artifact of the
> > cloud.
>
> Why would you only use 50 threads total across two hosts?
>
> -ryan
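The back-of-envelope arithmetic above can be sketched as a few lines of Python (the throughput figures are the rough ones from the message -- ~120 MB/s usable on 1 GbE, ~10 MB/s on 100 Mbit -- and the helper name is just illustrative):

```python
# Rough budget: average bytes available per request at a given link speed,
# protocol overhead included.

def bytes_per_request(link_mb_per_sec, requests_per_sec):
    """Per-request byte budget for a link moving link_mb_per_sec MB/s."""
    return link_mb_per_sec * 1_000_000 / requests_per_sec

# 37,000 req/s on a 100 Mbit link (~10 MB/s): about 270 bytes each
print(round(bytes_per_request(10, 37_000)))    # -> 270

# Same rate on 1 GbE (~120 MB/s): about 3.2 KB each
print(round(bytes_per_request(120, 37_000)))   # -> 3243
```

If the observed per-request payload is well under those figures, the network is probably not the ceiling; if it is at or above them, the NIC or switch is a plausible bottleneck.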