Subject: Re: How to make compaction run faster?
From: Jay Svc <jaytechgeek@gmail.com>
To: user@cassandra.apache.org
Date: Tue, 23 Apr 2013 15:00:02 -0500
Thanks Aaron,

The parameters I tried above were set one at a time. Based on what I observed, the core problem is: can compaction catch up with the write speed?

I have gone up to 30,000 to 35,000 writes per second. I do not see the number of writes as much of an issue either; the issue is that compaction is not catching up with the write speed, in spite of the spare CPU and memory I have. Over time we will see a growing number of pending compactions as the writes continue, and that will degrade my read performance.

Do you think STCS is the compaction strategy to speed up compaction? What is a good approach when we have a large number of reads and writes, so that compaction catches up with the write speed?

Thank you in advance.
Jay
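(A minimal sketch of how to watch whether compaction is keeping up while a write test runs, assuming a stock 1.1.x install with nodetool on the path:)

    # pending compaction tasks; a number that keeps growing means compaction is falling behind the writes
    nodetool -h localhost compactionstats

    # per-column-family SSTable counts; after the test these should trend back down as compaction catches up
    nodetool -h localhost cfstats | grep -i "SSTable count"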




On Sun, Apr 21, 2013 at 1:43 PM, aaron morton <aaron@thelastpickle.com> wrote:
You are suggesting to go back to STCS and increase the compaction_throughput step by step to see if compaction catches up with write traffic?
As a personal approach, when so many config settings are changed it becomes impossible to understand cause and effect. So I try to return to a known baseline and then make changes.

As I watched disk latency on DSE OpsCenter as well as in iostat, the await is always 35 to 40 ms for long periods during the test.
You previously said this was the await on the commit log.
What is the queue size?

The problem sounds like IO is not keeping up; moving to STCS will reduce the IO. Levelled Compaction is designed to reduce the number of SSTables in a read, not to do compaction faster.

At some point you may be writing too fast for the nodes. I'm not sure if you have discussed the level of writes going through the system. Get something that works and then make one change at a time until it does not. You should then be able to say "The system can handle X writes of Y size per second, but after that compaction cannot keep up."

Cheers

-----------------
Aaron Morton
Freelance Cassandra Consultant
New Zealand


On 19/04/2013, at 7:16 AM, Jay Svc <jaytechgeek@gmail.com> wrote:

Thanks Aaron,

Please find answers to your questions.

1. I started the test with default parameters and compaction was backing up, so I went for various options.
2. The data is on RAID10.
3. As I watched disk latency on DSE OpsCenter as well as in iostat, the await is always 35 to 40 ms for long periods during the test (which probably gives me high write latency on the client side). Do you think this could contribute to slowing down the compaction? Probably not!

So Aaron, I am trying to understand:
You are suggesting to go back to STCS and increase the compaction_throughput step by step to see if compaction catches up with write traffic?

Thank you for your inputs.

Regards,
Jay


On Thu, Apr 18, 2013 at 1:52 PM, aaron morton <aaron@thelastpickle.com> wrote:
> Parameters used:
>       • SSTable size: 500MB (tried various sizes from 20MB to 1GB)
>       • Compaction throughput mb per sec: 250MB (tried from 16MB to 640MB)
>       • Concurrent write: 196 (tried from 32 to 296)
>       • Concurrent compactors: 72 (tried disabling to making it 172)
>       • Multithreaded compaction: true (tried both true and false)
>       • Compaction strategy: LCS (tried STCS as well)
>       • Memtable total space in mb: 4096 MB (tried default and some other params too)
I would restore to default settings before I did anything else.

> Aaron, please find the iostat below: the sdb and dm-2 are the commitlog disks.
> Please find the iostat of 3 different boxes in my cluster.

What is the data on?
It's important to call iostat with a period and watch the await / queue size over time, not just view a snapshot.

I would go back to STCS with default settings, and ramp up the write throughput until compaction cannot keep up. Then increase the compaction throughput and see how that works. Then increase throughput again and see what happens.
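(A small sketch of watching the devices over a period rather than from a single snapshot — 5-second samples for ten minutes, using the same flags as the output further down this thread:)

    # sample every 5 seconds, 120 times; watch await and avgqu-sz on the commit log (sdb/dm-2) and data (sda/dm-4) devices
    iostat -xkcd 5 120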

Cheers


-----------------
Aaron Morton
Freelance Cassandra Consultant
New Zealand

@aaronmorton
http://www.thelastpickle.com

On 19/04/2013, at 5:05 AM, Jay Svc <jaytechgeek@gmail.com> wrote:

> Hi Aaron, Alexis,
>
> Thanks for the reply. Please find some more details below.
>
> Core problem: compaction is taking a long time to finish, so it will affect my reads. I have spare CPU and memory and want to utilize them to speed up the compaction process.
> Parameters used:
>       • SSTable size: 500MB (tried various sizes from 20MB to 1GB)
>       • Compaction throughput mb per sec: 250MB (tried from 16MB to 640MB)
>       • Concurrent write: 196 (tried from 32 to 296)
>       • Concurrent compactors: 72 (tried disabling to making it 172)
>       • Multithreaded compaction: true (tried both true and false)
>       • Compaction strategy: LCS (tried STCS as well)
>       • Memtable total space in mb: 4096 MB (tried default and some other params too)
> Note: I have tried almost all permutations and combinations of these parameters.
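(The knobs in the list above map onto cassandra.yaml settings plus one per-CF option; a quick sketch, assuming the usual package config path, for checking what a node is actually running with. For comparison, the stock 1.1.x defaults are roughly 16 MB/s compaction throughput, 32 concurrent writes, and multithreaded compaction off:)

    # the cassandra.yaml keys behind the parameters listed above
    grep -nE 'compaction_throughput_mb_per_sec|concurrent_writes|concurrent_compactors|multithreaded_compaction|memtable_total_space_in_mb' \
        /etc/cassandra/cassandra.yaml
    # the SSTable target size is not a yaml key; it is a per-CF LCS option (sstable_size_in_mb in compaction_strategy_options)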
> Observations:
> I ran a test for 1.15 hrs with writes at the rate of 21,000 records/sec (60GB of data in total during the 1.15 hrs). After I stopped the test,
> compaction took an additional 1.30 hrs to finish, which reduced the SSTable count from 170 to 17.
> CPU (24 cores): almost 80% idle during the run
> JVM: 48G RAM, 8G heap (3G to 5G heap used)
> Pending writes: occasional short-lived spikes, otherwise pretty flat
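(The pending-write spikes above are also visible from nodetool; a minimal sketch, run on a node while the test is going:)

    # MutationStage pending = writes queued up, FlushWriter pending = memtable flushes waiting on the data disk
    nodetool -h localhost tpstats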
> Aaron, please find the iostat below: the sdb and dm-2 are the commitlog disks.
> Please find the iostat of 3 different boxes in my cluster.
> -bash-4.1$ iostat -xkcd
> Linux 2.6.32-358.2.1.el6.x86_64 (edc-epod014-dl380-3) 04/18/2013 _x86_64_ (24 CPU)
> avg-cpu: %user %nice %system %iowait %steal %idle
> 1.20 1.11 0.59 0.01 0.00 97.09
> Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util
> sda 0.03 416.56 9.00 7.08 1142.49 1694.55 352.88 0.07 4.08 0.57 0.92
> sdb 0.00 172.38 0.08 3.34 10.76 702.89 416.96 0.09 24.84 0.94 0.32
> dm-0 0.00 0.00 0.03 0.75 0.62 3.00 9.24 0.00 1.45 0.33 0.03
> dm-1 0.00 0.00 0.00 0.00 0.00 0.00 8.00 0.00 0.74 0.68 0.00
> dm-2 0.00 0.00 0.08 175.72 10.76 702.89 8.12 3.26 18.49 0.02 0.32
> dm-3 0.00 0.00 0.00 0.00 0.00 0.00 7.97 0.00 0.83 0.62 0.00
> dm-4 0.00 0.00 8.99 422.89 1141.87 1691.55 13.12 4.64 10.71 0.02 0.90
> -bash-4.1$ iostat -xkcd
> Linux 2.6.32-358.2.1.el6.x86_64 (ndc-epod014-dl380-1) 04/18/2013 _x86_64_ (24 CPU)
> avg-cpu: %user %nice %system %iowait %steal %idle
> 1.20 1.12 0.52 0.01 0.00 97.14
> Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await svc
> sda 0.01 421.17 9.22 7.43 1167.81 1714.38 346.10 0.07 3.99 0.
> sdb 0.00 172.68 0.08 3.26 10.52 703.74 427.79 0.08 25.01 0.
> dm-0 0.00 0.00 0.04 1.04 0.89 4.16 9.34 0.00 2.58 0.
> dm-1 0.00 0.00 0.00 0.00 0.00 0.00 8.00 0.00 0.77 0.
> dm-2 0.00 0.00 0.08 175.93 10.52 703.74 8.12 3.13 17.78 0.
> dm-3 0.00 0.00 0.00 0.00 0.00 0.00 7.97 0.00 1.14 0.
> dm-4 0.00 0.00 9.19 427.55 1166.91 1710.21 13.18 4.67 10.65 0.
> -bash-4.1$ iostat -xkcd
> Linux 2.6.32-358.2.1.el6.x86_64 (edc-epod014-dl380-1) 04/18/2013 _x86_64_ (24 CPU)
> avg-cpu: %user %nice %system %iowait %steal %idle
> 1.15 1.13 0.52 0.01 0.00 97.19
> Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util
> sda 0.02 429.97 9.28 7.29 1176.81 1749.00 353.12 0.07 4.10 0.55 0.91
> sdb 0.00 173.65 0.08 3.09 10.50 706.96 452.25 0.09 27.23 0.99 0.31
> dm-0 0.00 0.00 0.04 0.79 0.82 3.16 9.61 0.00 1.54 0.27 0.02
> dm-1 0.00 0.00 0.00 0.00 0.00 0.00 8.00 0.00 0.68 0.63 0.00
> dm-2 0.00 0.00 0.08 176.74 10.50 706.96 8.12 3.46 19.53 0.02 0.31
> dm-3 0.00 0.00 0.00 0.00 0.00 0.00 7.97 0.00 0.85 0.83 0.00
> dm-4 0.00 0.00 9.26 436.46 1175.98 1745.84 13.11 0.03 0.03 0.02 0.89
> Thanks,
> Jay
>
>
> On Thu, Apr 18, 2013 at 2:50 AM, aaron morton <aaron@thelastpickle.com> wrote:
> > I believe that compaction occurs on the data directories and not in the commitlog.
> Yes, compaction only works on the data files.
>
> > When I ran iostat I see "await" of 26 ms to 30 ms for my commit log disk. My CPU is less than 18% used.
> >
> > How do I reduce the disk latency for my commit log disk? They are SSDs.
> That does not sound right. Can you include the output from iostat for the commit log and data volumes? Also some information on how many writes you are processing and the size of the rows as well.
>
> Cheers
>
> -----------------
> Aaron Morton
> Freelance Cassandra Consultant
> New Zealand
>
> @aaronmorton
> http://www.thelastpickle.com
>
> On 18/04/2013, at 11:58 AM, Alexis Rodríguez <arodriguez@inconcertcc.com> wrote:
>
> > Jay,
> >
> > I believe that compaction occurs on the data directories and not in the commitlog.
> >
> > http://wiki.apache.org/cassandra/MemtableSSTable
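(Since compaction I/O lands on the data directories rather than the commit log, a quick sketch — assuming the usual package config and data paths, adjust to your install — for confirming which volumes those actually are, so they can be matched against the iostat device names elsewhere in this thread:)

    # where the data files (compaction) and the commit log actually live
    grep -E 'data_file_directories|commitlog_directory' /etc/cassandra/cassandra.yaml
    df -h /var/lib/cassandra/data /var/lib/cassandra/commitlog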
> >
> >
> >
> >
> > On Wed, Apr 17, 2013 at 7:58 PM, Jay Svc <jaytechgeek@gmail.com> wrote:
> > Hi Alexis,
> >
> > Thank you for your response.
> >
> > My commit log is on SSD, which shows me 30 to 40 ms of disk latency.
> >
> > When I ran iostat I see "await" of 26 ms to 30 ms for my commit log disk. My CPU is less than 18% used.
> >
> > How do I reduce the disk latency for my commit log disk? They are SSDs.
> >
> > Thank you in advance,
> > Jay
> >
> >
> > On Wed, Apr 17, 2013 at 3:58 PM, Alexis Rodríguez <arodriguez@inconcertcc.com> wrote:
> > :D
> >
> > Jay, check if your disk(s) utilization allows you to change the configuration the way Edward suggests. iostat -xkcd 1 will show you how much of your disk(s) are in use.
> >
> >
> >
> >
> > On Wed, Apr 17, 2013 at 5:26 PM, Edward Capriolo <edlinuxguru@gmail.com> wrote:
> > three things:
> > 1) compaction throughput is fairly low (yaml nodetool)
> > 2) concurrent compactions is fairly low (yaml)
> > 3) multithreaded compaction might be off in your version
> >
> > Try raising these things. Otherwise consider option 4.
> >
> > 4)$$$$$$$$$$$$$$$$$$$$$$$ RAID,RAM<CPU$$$$$$$$$$$$$$
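(A minimal sketch of raising the first of those knobs on a live node without a restart, assuming a 1.1-era nodetool; the change is not persisted to cassandra.yaml, and 0 removes the throttle entirely, which is only sensible while watching the disks:)

    # raise the compaction throttle (MB/s) on a running node
    nodetool -h localhost setcompactionthroughput 64
    # or remove the throttle altogether while testing
    nodetool -h localhost setcompactionthroughput 0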
> >
> >
> > On Wed, Apr 17, 2013 at 4:01 PM, Jay Svc <jaytechgeek@gmail.com> wrote:
> > Hi Team,
> >
> >
> > I have high write traffic to my Cassandra cluster and I experience a very high number of pending compactions. As I push more writes, the pending compactions keep increasing. Even when I stop my writes it takes several hours to finish the pending compactions.
> >
> > My CF is configured with LCS, with sstable_size_mb=20M. My CPU is below 20%, JVM memory usage is between 45% and 55%. I am using Cassandra 1.1.9.
> >
> > How can I increase the compaction rate so it will run a bit faster to match my write speed?
> >
> > Your inputs are appreciated.
> >
> > Thanks,
> > Jay
> >
> >
> >
> >
> >
>
>



