incubator-cassandra-user mailing list archives

From Jay Svc <jaytechg...@gmail.com>
Subject Re: How to make compaction run faster?
Date Thu, 18 Apr 2013 19:16:07 GMT
Thanks Aaron,

Please find answers to your questions.

1. I started the test with default parameters and compaction was backing up,
so I went for various options.
2. The data is on RAID10.
3. Watching disk latency in DSE OpsCenter as well as in iostat, the await
stays at 35 to 40 ms for long stretches during the test (which probably
explains my high write latency on the client side). Do you think this could
be slowing down the compaction? Probably not..!

So Aaron, just to make sure I understand -
You are suggesting I go back to STCS and increase compaction_throughput step
by step, to see whether compaction can catch up with the write traffic?
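For what it's worth, a quick back-of-envelope check on the numbers from this test (60GB written over 1.15 hrs) suggests roughly what compaction has to sustain. The write-amplification factor below is an illustrative assumption, not a measured value:

```python
# Rough check: what sustained rate must compaction handle to keep up with
# the write traffic observed in this test? Figures are taken from the
# thread; the LCS write-amplification factor is an assumption.

data_written_mb = 60 * 1000        # ~60 GB ingested during the test
test_duration_s = 1.15 * 3600      # 1.15 hours of writes

ingest_rate = data_written_mb / test_duration_s  # MB/s reaching sstables
print(f"ingest rate: {ingest_rate:.1f} MB/s")

# LCS rewrites each byte several times as data moves down the levels;
# a factor of ~10 is a common rule-of-thumb assumption, not a measurement.
write_amplification = 10
required = ingest_rate * write_amplification
print(f"compaction must sustain roughly: {required:.0f} MB/s")
```

At roughly 145 MB/s of required compaction work against a single RAID10 volume that is also absorbing flushes, it is at least plausible that the disks, not CPU or heap, are the limit.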

Thank you for your inputs.

Regards,
Jay


On Thu, Apr 18, 2013 at 1:52 PM, aaron morton <aaron@thelastpickle.com> wrote:

> > Parameters used:
> >       • SSTable size: 500MB (tried various sizes from 20MB to 1GB)
> >       • Compaction throughput mb per sec: 250MB (tried from 16MB to
> 640MB)
> >       • Concurrent write: 196 (tried from 32 to 296)
> >       • Concurrent compactors: 72 (tried disabling to making it 172)
> >       • Multithreaded compaction: true (tried both true and false)
> >       • Compaction strategy: LCS (tried STCS as well)
> >       • Memtable total space in mb: 4096 MB (tried default and some
> other params too)
> I would restore to default settings before I did anything else.
>
> > Aaron, please find the iostat below: the sdb and dm-2 are the commitlog
> > disks.
> > Please find the iostat output from 3 different boxes in my cluster.
>
> What is the data on?
> It's important to call iostat with an interval and watch the await / queue
> size over time, not just view a snapshot.
>
> I would go back to STCS with default settings, and ramp up the write
> throughput until compaction cannot keep up. Then increase the compaction
> throughput and see how that works; then increase it again and see what
> happens.
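For reference, the 1.1-era defaults being suggested here look roughly like this in cassandra.yaml (values recalled from the 1.1 sample config; verify against the file that ships with your exact version):

```yaml
# Approximate Cassandra 1.1 defaults for the settings discussed in this thread
compaction_throughput_mb_per_sec: 16
multithreaded_compaction: false
# concurrent_compactors: defaults to the number of cores when left unset
# memtable_total_space_in_mb: defaults to 1/3 of the heap when left unset
```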
>
> Cheers
>
>
> -----------------
> Aaron Morton
> Freelance Cassandra Consultant
> New Zealand
>
> @aaronmorton
> http://www.thelastpickle.com
>
> On 19/04/2013, at 5:05 AM, Jay Svc <jaytechgeek@gmail.com> wrote:
>
> > Hi Aaron, Alexis,
> >
> > Thanks for reply, Please find some more details below.
> >
> > Core problem: Compaction is taking a long time to finish, which will
> > affect my reads. I have spare CPU and memory, and I want to use them to
> > speed up the compaction process.
> > Parameters used:
> >       • SSTable size: 500MB (tried various sizes from 20MB to 1GB)
> >       • Compaction throughput mb per sec: 250MB (tried from 16MB to
> 640MB)
> >       • Concurrent write: 196 (tried from 32 to 296)
> >       • Concurrent compactors: 72 (tried disabling to making it 172)
> >       • Multithreaded compaction: true (tried both true and false)
> >       • Compaction strategy: LCS (tried STCS as well)
> >       • Memtable total space in mb: 4096 MB (tried default and some
> other params too)
> > Note: I have tried almost all permutations and combinations of these
> > parameters.
> > Observations:
> > I ran the test for 1.15 hrs with writes at a rate of 21000
> > records/sec (60GB of data in total over the 1.15 hrs). After I stopped
> > the test, compaction took an additional 1.30 hrs to finish, which reduced
> > the SSTable count from 170 to 17.
> > CPU(24 cores): almost 80% idle during the run
> > JVM: 48G RAM, 8G Heap, (3G to 5G heap used)
> > Pending Writes: occasional brief spikes, otherwise pretty flat
> > Aaron, please find the iostat below: the sdb and dm-2 are the commitlog
> > disks.
> > Please find the iostat output from 3 different boxes in my cluster.
> > -bash-4.1$ iostat -xkcd
> > Linux 2.6.32-358.2.1.el6.x86_64 (edc-epod014-dl380-3) 04/18/2013
> _x86_64_ (24 CPU)
> > avg-cpu: %user %nice %system %iowait %steal %idle
> > 1.20 1.11 0.59 0.01 0.00 97.09
> > Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm
> %util
> > sda 0.03 416.56 9.00 7.08 1142.49 1694.55 352.88 0.07 4.08 0.57 0.92
> > sdb 0.00 172.38 0.08 3.34 10.76 702.89 416.96 0.09 24.84 0.94 0.32
> > dm-0 0.00 0.00 0.03 0.75 0.62 3.00 9.24 0.00 1.45 0.33 0.03
> > dm-1 0.00 0.00 0.00 0.00 0.00 0.00 8.00 0.00 0.74 0.68 0.00
> > dm-2 0.00 0.00 0.08 175.72 10.76 702.89 8.12 3.26 18.49 0.02 0.32
> > dm-3 0.00 0.00 0.00 0.00 0.00 0.00 7.97 0.00 0.83 0.62 0.00
> > dm-4 0.00 0.00 8.99 422.89 1141.87 1691.55 13.12 4.64 10.71 0.02 0.90
> > -bash-4.1$ iostat -xkcd
> > Linux 2.6.32-358.2.1.el6.x86_64 (ndc-epod014-dl380-1) 04/18/2013
> _x86_64_ (24 CPU)
> > avg-cpu: %user %nice %system %iowait %steal %idle
> > 1.20 1.12 0.52 0.01 0.00 97.14
> > Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await svc
> > sda 0.01 421.17 9.22 7.43 1167.81 1714.38 346.10 0.07 3.99 0.
> > sdb 0.00 172.68 0.08 3.26 10.52 703.74 427.79 0.08 25.01 0.
> > dm-0 0.00 0.00 0.04 1.04 0.89 4.16 9.34 0.00 2.58 0.
> > dm-1 0.00 0.00 0.00 0.00 0.00 0.00 8.00 0.00 0.77 0.
> > dm-2 0.00 0.00 0.08 175.93 10.52 703.74 8.12 3.13 17.78 0.
> > dm-3 0.00 0.00 0.00 0.00 0.00 0.00 7.97 0.00 1.14 0.
> > dm-4 0.00 0.00 9.19 427.55 1166.91 1710.21 13.18 4.67 10.65 0.
> > -bash-4.1$ iostat -xkcd
> > Linux 2.6.32-358.2.1.el6.x86_64 (edc-epod014-dl380-1) 04/18/2013
> _x86_64_ (24 CPU)
> > avg-cpu: %user %nice %system %iowait %steal %idle
> > 1.15 1.13 0.52 0.01 0.00 97.19
> > Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm
> %util
> > sda 0.02 429.97 9.28 7.29 1176.81 1749.00 353.12 0.07 4.10 0.55 0.91
> > sdb 0.00 173.65 0.08 3.09 10.50 706.96 452.25 0.09 27.23 0.99 0.31
> > dm-0 0.00 0.00 0.04 0.79 0.82 3.16 9.61 0.00 1.54 0.27 0.02
> > dm-1 0.00 0.00 0.00 0.00 0.00 0.00 8.00 0.00 0.68 0.63 0.00
> > dm-2 0.00 0.00 0.08 176.74 10.50 706.96 8.12 3.46 19.53 0.02 0.31
> > dm-3 0.00 0.00 0.00 0.00 0.00 0.00 7.97 0.00 0.85 0.83 0.00
> > dm-4 0.00 0.00 9.26 436.46 1175.98 1745.84 13.11 0.03 0.03 0.02 0.89
> > Thanks,
> > Jay
> >
> >
> > On Thu, Apr 18, 2013 at 2:50 AM, aaron morton <aaron@thelastpickle.com>
> wrote:
> > > I believe that compaction occurs on the data directories and not in
> the commitlog.
> > Yes, compaction only works on the data files.
> >
> > > When I ran iostat, I saw "await" of 26 ms to 30 ms for my commit log
> > > disk. My CPU is less than 18% used.
> > >
> > > How do I reduce the disk latency for my commit log disks? They are SSDs.
> > That does not sound right. Can you include the output from iostat for
> > the commit log and data volumes, as well as some information on how many
> > writes you are processing and the size of the rows?
> >
> > Cheers
> >
> > -----------------
> > Aaron Morton
> > Freelance Cassandra Consultant
> > New Zealand
> >
> > @aaronmorton
> > http://www.thelastpickle.com
> >
> > On 18/04/2013, at 11:58 AM, Alexis Rodríguez <arodriguez@inconcertcc.com>
> wrote:
> >
> > > Jay,
> > >
> > > I believe that compaction occurs on the data directories and not in
> the commitlog.
> > >
> > > http://wiki.apache.org/cassandra/MemtableSSTable
> > >
> > >
> > >
> > >
> > > On Wed, Apr 17, 2013 at 7:58 PM, Jay Svc <jaytechgeek@gmail.com>
> wrote:
> > > Hi Alexis,
> > >
> > > Thank you for your response.
> > >
> > > My commit log is on SSD, which shows me 30 to 40 ms of disk latency.
> > >
> > > When I ran iostat, I saw "await" of 26 ms to 30 ms for my commit log
> > > disk. My CPU is less than 18% used.
> > >
> > > How do I reduce the disk latency for my commit log disks? They are SSDs.
> > >
> > > Thank you in advance,
> > > Jay
> > >
> > >
> > > On Wed, Apr 17, 2013 at 3:58 PM, Alexis Rodríguez <
> arodriguez@inconcertcc.com> wrote:
> > > :D
> > >
> > > Jay, check whether your disk utilization allows you to change the
> > > configuration the way Edward suggests. iostat -xkcd 1 will show you how
> > > much of your disks are in use.
> > >
> > >
> > >
> > >
> > > On Wed, Apr 17, 2013 at 5:26 PM, Edward Capriolo <
> edlinuxguru@gmail.com> wrote:
> > > Three things:
> > > 1) compaction throughput is fairly low (cassandra.yaml / nodetool)
> > > 2) concurrent compactors is fairly low (yaml)
> > > 3) multithreaded compaction might be off in your version
> > >
> > > Try raising these things. Otherwise consider option 4.
> > >
> > > 4) $$$$$$$$$$$$$$$$$$$$$$$ RAID, RAM, CPU $$$$$$$$$$$$$$
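On the first of those, the cap can also be changed on a live node without a restart. Assuming your 1.1 build has these nodetool subcommands (worth confirming with nodetool help), something like:

```shell
# Raise the compaction throughput cap on one node (reverts to the
# cassandra.yaml value on restart; 0 removes the cap entirely)
nodetool -h localhost setcompactionthroughput 64

# Then watch whether pending compactions start draining
nodetool -h localhost compactionstats
```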
> > >
> > >
> > > On Wed, Apr 17, 2013 at 4:01 PM, Jay Svc <jaytechgeek@gmail.com>
> wrote:
> > > Hi Team,
> > >
> > >
> > > I have high write traffic to my Cassandra cluster and am seeing a
> > > very high number of pending compactions. As I push writes higher, the
> > > pending compactions keep increasing. Even when I stop my writes, it
> > > takes several hours to finish the pending compactions.
> > >
> > > My CF is configured with LCS, with sstable_size_in_mb=20. My CPU is
> > > below 20%, and JVM memory usage is between 45%-55%. I am using
> > > Cassandra 1.1.9.
> > >
> > > How can I increase the compaction rate so it runs a bit faster and
> > > keeps up with my write speed?
> > >
> > > Your inputs are appreciated.
> > >
> > > Thanks,
> > > Jay
> > >
> > >
> > >
> > >
> > >
> >
> >
>
>
