Subject: Re: New User: OSX vs. Debian on Cassandra 0.5.0 with Thrift
From: Heath Oderman
To: user@cassandra.apache.org
Date: Fri, 23 Apr 2010 15:29:47 -0400

Really interesting find.

After Jonathan E. suggested py_stress and it seemed clear the problem was in my .NET client, I spent a few days debugging the client in detail.

I ended up changing my CassandraContext instantiation to use a

        TBufferedTransport(TSocket) instead of a
        TSocket directly.
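For anyone curious, the change boils down to wrapping the socket in a buffered transport before handing it to the protocol. Here is a minimal sketch of that in C# against the raw Thrift API (assuming the stock Thrift.Transport / Thrift.Protocol classes and generated Cassandra bindings for 0.5.x; the host name and class names here are placeholders, and CassandraContext does roughly the same thing internally):

        using Thrift.Protocol;
        using Thrift.Transport;
        using Apache.Cassandra;  // generated Cassandra Thrift bindings (namespace may differ)

        class BufferedClientSketch
        {
            static void Main()
            {
                // Before: the protocol sat directly on the raw socket, so every
                // Thrift call turned into many tiny writes on the wire.
                //   TSocket raw = new TSocket("debian-host", 9160);
                //   TProtocol p = new TBinaryProtocol(raw);

                // After: buffer the socket so each call is flushed as one larger write.
                TSocket socket = new TSocket("debian-host", 9160);
                TBufferedTransport transport = new TBufferedTransport(socket);
                TProtocol protocol = new TBinaryProtocol(transport);
                Cassandra.Client client = new Cassandra.Client(protocol);

                transport.Open();
                // ... issue insert / batch_insert calls against client here ...
                transport.Close();
            }
        }

(My guess is that the buffering collapses Thrift's many small writes per call into one send, which would make the raw-socket case much more sensitive to each OS's TCP behaviour, but I haven't verified that.)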
The difference was *dramatic*.

The calls to Debian suddenly behaved as expected, beating the write speeds under load of the calls to the OSX box by a factor of 2!

The change caused a performance increase in the client communicating with OSX as well, but the improvement was smaller.

I don't understand exactly why, but clearly there's a difference in the way that Debian and OSX handle socket-level communication that has a big effect on a .NET client calling in from Windows.

It's been a really interesting experiment and I thoroughly appreciate all the help and pointers I've gotten from this list.

Cassandra is so fast, and so impressive it strains credibility. I'm totally amazed by what these guys have put together.

Thanks,
Stu