From: Shahab Yunus
Date: Wed, 19 Jun 2013 08:40:34 -0400
Subject: Re: Dropped mutation messages
To: user@cassandra.apache.org

Hello Arthur,

What do you mean by "The queries need to be lightened"?

Thanks,
Shahab

On Tue, Jun 18, 2013 at 8:47 PM, Arthur Zubarev wrote:
> Cem hi,
>
> As per http://wiki.apache.org/cassandra/FAQ#dropped_messages
>
> Internode messages which are received by a node but do not get processed
> within rpc_timeout are dropped rather than processed, as the coordinator
> node will no longer be waiting for a response. If the coordinator node
> does not receive Consistency Level responses before the rpc_timeout, it
> will return a TimedOutException to the client. If the coordinator does
> receive Consistency Level responses, it will return success to the
> client.
>
> For MUTATION messages this means that the mutation was not applied to
> all replicas it was sent to. The inconsistency will be repaired by Read
> Repair or Anti-Entropy Repair.
>
> For READ messages this means a read request may not have completed.
>
> Load shedding is part of the Cassandra architecture; if this is a
> persistent issue, it is generally a sign of an overloaded node or
> cluster.
>
> By the way, I am on C* 1.2.4 too, in dev mode. After my node filled up
> with 400 GB I started getting RPC timeouts on large data retrievals, so
> in short, you may need to revise how you query.
> The queries need to be lightened.
>
> /Arthur
>
> *From:* cem
> *Sent:* Tuesday, June 18, 2013 1:12 PM
> *To:* user@cassandra.apache.org
> *Subject:* Dropped mutation messages
>
> Hi All,
>
> I have a cluster of 5 nodes with C* 1.2.4.
>
> Each node has 4 disks of 1 TB each.
>
> I see a lot of dropped messages after it stores 400 GB per disk (1.6 TB
> per node).
>
> The recommendation was 500 GB max per node before 1.2. Datastax says
> that we can store terabytes of data per node with 1.2:
> http://www.datastax.com/docs/1.2/cluster_architecture/cluster_planning
>
> Do I need to enable anything to take advantage of 1.2? Do you have any
> other advice?
>
> What should be the path to investigate this?
>
> Thanks in advance!
>
> Best Regards,
> Cem.
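[Editor's note: the coordinator behaviour quoted from the FAQ above — wait up to rpc_timeout for replica acknowledgements, return success if the consistency level is met and TimedOutException otherwise — can be sketched as a toy model. This is illustrative only; the names `wait_for_replicas` and the latency-list representation are invented for the sketch and are not Cassandra's internal API.]

```python
class TimedOutException(Exception):
    """Raised when fewer than CL replicas respond within rpc_timeout."""


def wait_for_replicas(ack_latencies_ms, consistency_level, rpc_timeout_ms=10000):
    """Toy coordinator: count replica acks that arrived within rpc_timeout.

    ack_latencies_ms: one entry per replica the mutation was sent to;
    the replica's response latency in ms, or None if it never answered.
    Returns the number of in-time acks, or raises TimedOutException.
    """
    in_time = [t for t in ack_latencies_ms
               if t is not None and t <= rpc_timeout_ms]
    if len(in_time) < consistency_level:
        # The coordinator stops waiting; replicas drop (load-shed) any
        # mutation they could not process in time, and the resulting
        # inconsistency is fixed later by read repair or anti-entropy repair.
        raise TimedOutException(
            f"{len(in_time)}/{consistency_level} replicas answered "
            f"within {rpc_timeout_ms} ms")
    return len(in_time)


# QUORUM (CL=2) of 3 replicas: two fast acks suffice even though the
# third replica never responds and its mutation is dropped.
print(wait_for_replicas([40, 75, None], consistency_level=2))  # 2
```

Note that in this model a dropped mutation and a successful client response can coexist: the write "succeeded" at CL even though one replica shed it, which is exactly why dropped MUTATION counters alone do not imply lost data.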
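[Editor's note: Arthur's point that the inconsistency left by a dropped mutation "will be repaired by Read Repair" can likewise be illustrated with a toy model: the coordinator compares the timestamped values it read from each replica, picks the newest, and writes it back to the stale replicas. This sketch is an assumption-laden simplification, not Cassandra's implementation.]

```python
def read_repair(replica_values):
    """Toy read repair over a dict of replica -> (timestamp, value).

    Picks the value with the highest timestamp as the winner, writes it
    back to every replica holding an older timestamp, and returns the
    winning value plus the list of replicas that were repaired.
    """
    winner_ts, winner_val = max(replica_values.values())
    stale = [r for r, (ts, _) in replica_values.items() if ts < winner_ts]
    for r in stale:
        replica_values[r] = (winner_ts, winner_val)  # write-back repair
    return winner_val, stale


# r2 missed a mutation (older timestamp) and gets repaired on read.
replicas = {"r1": (100, "x"), "r2": (90, "w"), "r3": (100, "x")}
value, repaired = read_repair(replicas)
print(value, repaired)  # x ['r2']
```

The same idea underlies anti-entropy repair, except that it walks the full dataset rather than only rows that happen to be read.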