From: Daniel Doubleday
Date: Sat, 04 Dec 2010 02:27:36 +0100
To: user@cassandra.apache.org
Subject: Re: Dont bogart that connection my friend

Yes. I thought that would make sense, no?

I guessed that the quorum read forces the slowest of the 3 nodes to keep the pace of the faster ones. But it can't, no matter how small the performance difference is, so it will just fill up.

Also, when saying 'practically dead' and 'never recovers' I meant for the time I kept the reads up. As soon as I stopped the scan it recovered. It just was not able to recover during the load, because for that it would have to become faster than the other nodes, and with full queues that just wouldn't happen.

By changing the node for every read I would hit the slower node every couple of reads. This forced the client to wait for the slower node.

I guess to change that behavior you would need something like the dynamic snitch: ask only as many peer nodes as necessary to satisfy the quorum, and only ask other nodes when reads fail. But that would probably increase latency and cause other problems.

Since you probably don't want to run the cluster at a load at which the weakest node of a replication group can't keep up, I don't think this is an issue at all. Just wanted to prevent others from shooting themselves in the foot as I did. (I've put a small sketch of the per-read connection handling below the quoted thread.)

On 03.12.10 23:36, Jonathan Ellis wrote:
> Am I understanding correctly that you had all connections going to one
> cassandra node, which caused one of the *other* nodes to die, and
> spreading the connections around the cluster fixed it?
>
> On Fri, Dec 3, 2010 at 4:00 AM, Daniel Doubleday wrote:
>> Hi all
>>
>> I found an anti-pattern the other day which I wanted to share, although it's a pretty special case.
>>
>> Special case because our production cluster is somewhat strange: 3 servers, rf = 3. We do consistent reads/writes with quorum.
>>
>> I did a long-running read series (loads of reads, as fast as I can) with one connection. Since all queries could be handled by that node, the overall latency is determined by its own latency and that of the faster of the other two nodes (because the quorum is satisfied with 2 reads). What happens then is that after a couple of minutes one of the other two nodes goes into 100% io wait and drops most of its read messages, leaving it practically dead while the other 2 nodes keep responding at an average of ~10ms. The node that died was only a little slower (~13ms average), but it would inevitably queue up messages. Its average response time increases to a flat timeout (10 secs). It never recovers.
>>
>> It happened all the time, and it wasn't always the same node that would die.
>>
>> The solution was to return the connection to the pool and get a new one for every read, to balance the load on the client side.
>>
>> Obviously this will not happen in a cluster where the percentage of all rows on one node is small enough. But the same thing will probably happen if you scan by contiguous tokens (meaning that you will read from the same node for a long time).
>>
>> Cheers,
>>
>> Daniel Doubleday
>> smeet.com, Berlin
>
>
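
PS - in case the client-side part isn't clear, here is a rough sketch of what I mean by per-read connection handling. The ConnectionPool / CassandraConnection names are made up for illustration only, not an actual Thrift or Hector API; the only point is that the connection (and with it the coordinator node) changes on every read instead of one node coordinating the whole scan:

import java.util.List;

// Hypothetical placeholders, not a real client API.
interface CassandraConnection {
    byte[] quorumRead(String key) throws Exception; // stands in for a QUORUM read
}

interface ConnectionPool {
    CassandraConnection borrow();               // assumed to hand out connections
    void release(CassandraConnection conn);     // round-robin over all nodes
}

class BalancedScanner {
    private final ConnectionPool pool;

    BalancedScanner(ConnectionPool pool) {
        this.pool = pool;
    }

    // Anti-pattern: borrow once and run the whole scan on that connection.
    // With 3 nodes and rf = 3 the coordinator only waits for the 2 fastest
    // replicas, so the slowest replica silently falls behind until its read
    // queue fills up and it starts dropping messages.
    void scanOnOneConnection(List<String> keys) throws Exception {
        CassandraConnection conn = pool.borrow();
        try {
            for (String key : keys) {
                conn.quorumRead(key);
            }
        } finally {
            pool.release(conn);
        }
    }

    // Fix: one borrow/release per read, so the node that is allowed to lag
    // behind the quorum rotates and every node has to keep pace.
    void scanBalanced(List<String> keys) throws Exception {
        for (String key : keys) {
            CassandraConnection conn = pool.borrow();
            try {
                conn.quorumRead(key);
            } finally {
                pool.release(conn);
            }
        }
    }
}

Of course this only helps if the pool actually rotates over all nodes; with a pool pinned to a single host you get the original behaviour back.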