From: Henry Luo
To: user@cassandra.apache.org
Date: Thu, 14 Oct 2010 16:23:35 -0400
Subject: Hundreds of compactions a day, is it normal?

We have a five-node cluster using a replication factor of 3. The application is only sending write requests at this point - we'd like to gain some operational experience with it before we start reading from it.

We are seeing over a hundred compaction activities a day on each server; some of them are for HintsColumnFamily.

Each machine has 32 GB of memory and two disk arrays: one with RAID 0 for the commit log, one with RAID 5 for data. We are running version 0.6.1 with pretty much the out-of-the-box storage.xml.

Is this normal? Where should we look for tuning?
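My rough back-of-envelope model of why a steady write load would produce a steady stream of minor compactions, assuming the size-tiered behaviour I understand 0.6 to have (a flush whenever a memtable fills, and a minor compaction once roughly four similar-sized SSTables pile up). The 64 MB memtable threshold and the 30 GB/day write figure below are placeholders rather than our measured numbers - the real thresholds live in storage.xml - so please correct me if the model itself is wrong:

# Rough estimate of flush/compaction frequency for one column family under
# a steady write-only load.  All thresholds here are assumptions -- the
# real values are in storage.xml (MemtableThroughputInMB etc.).

MEMTABLE_FLUSH_MB = 64           # assumed memtable size that triggers a flush
MIN_SSTABLES_PER_COMPACTION = 4  # assumed minor-compaction trigger

def flushes_and_compactions_per_day(write_mb_per_day: float):
    """Return (flushes/day, first-tier minor compactions/day)."""
    flushes = write_mb_per_day / MEMTABLE_FLUSH_MB
    # Every ~4 freshly flushed SSTables get merged into one; the merged
    # output feeds later, larger compactions, so the real total is higher.
    compactions = flushes / MIN_SSTABLES_PER_COMPACTION
    return flushes, compactions

if __name__ == "__main__":
    # hypothetical figure: ~30 GB of writes landing on a node per day
    f, c = flushes_and_compactions_per_day(30 * 1024)
    print(f"~{f:.0f} flushes/day, ~{c:.0f}+ minor compactions/day")

If that arithmetic is roughly right, a hundred-plus compactions a day may just be the normal cost of the write volume. The HintsColumnFamily compactions worry me more, since (as I understand it) hints only build up when a replica is unreachable.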
Here is the ring info:

Address       Status   Load       Range                                      Ring
                                   103348149328693428942388257816272166328
10.100.10.68  Up       136.41 GB   62116456964768051843784433654721163092    |<--|
10.100.10.64  Up       136.31 GB   82105179051854269619799333828977372565    |   ^
10.100.10.66  Up       152.77 GB   92197953251627500070365755299174650936    v   |
10.100.10.72  Up       71.38 GB    102264937228017528105060257264614100661   |   ^
10.100.10.76  Up       24.8 GB     103348149328693428942388257816272166328   |-->|
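One thing I noticed while pasting this: the tokens look far from evenly spaced. Assuming we are on RandomPartitioner (the storage.xml default, and we haven't changed it), my understanding is that tokens live in 0..2**127 and a balanced N-node ring wants token i*2**127/N on node i. A quick sketch of what I mean, with our actual tokens copied in (the formula is my assumption, not something I've verified against the source):

# Compare our current tokens with evenly spaced RandomPartitioner tokens.
# Assumes the 0..2**127 token space; the "balanced" formula is my own
# understanding of how RandomPartitioner rings are usually laid out.

NODES = 5
RING = 2 ** 127

current = {
    "10.100.10.68": 62116456964768051843784433654721163092,
    "10.100.10.64": 82105179051854269619799333828977372565,
    "10.100.10.66": 92197953251627500070365755299174650936,
    "10.100.10.72": 102264937228017528105060257264614100661,
    "10.100.10.76": 103348149328693428942388257816272166328,
}

balanced = [i * RING // NODES for i in range(NODES)]

print("balanced tokens for a 5-node ring:")
for t in balanced:
    print(f"  {t}")

print("\nshare of the ring owned by each node (previous token to its own):")
tokens = sorted(current.items(), key=lambda kv: kv[1])
for (ip, tok), (_, prev) in zip(tokens, tokens[-1:] + tokens[:-1]):
    owned = (tok - prev) % RING
    print(f"  {ip}: {100.0 * owned / RING:.1f}%")

If that's right, the load imbalance (24.8 GB vs 152 GB) is probably mostly token placement rather than anything compaction-related; I assume nodetool move with evenly spaced tokens would be the fix, but I'd appreciate confirmation.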

Thanks.

Henry

 


