incubator-cassandra-user mailing list archives

From Aaron Morton <aa...@thelastpickle.com>
Subject Re: Write performance with 1.2.12
Date Wed, 18 Dec 2013 02:31:24 GMT
> With a single node I get 3K for Cassandra 1.0.12 and 1.2.12, so I suspect there is some
> network chatter. I have started looking at the sources, hoping to find something.
1.2 is pretty stable; I doubt there is anything in there that makes it run slower than 1.0.
It's probably something in your configuration or network.

Compare the local write time from nodetool cfhistograms and the request latency from nodetool
proxyhistograms. Write request latency should be a bit below 1 ms and local write latency
should be around 0.5 ms or better. If there is a wide difference between the two, it's wait
time + network time.
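The comparison above can be sketched numerically; the medians below are hypothetical, not taken from this thread (the nodetool histograms report microseconds; values here are converted to ms for readability):

```python
# Hypothetical median latencies (ms) read off the two nodetool outputs:
#   local write -> `nodetool cfhistograms <keyspace> <cf>` (Write Latency column)
#   request     -> `nodetool proxyhistograms` (Write Latency histogram)
local_write_ms = 0.5
request_ms = 0.9

# Whatever the request latency carries beyond the local write cost is
# wait time + network time between the coordinator and the replicas.
wait_plus_network_ms = request_ms - local_write_ms
print(round(wait_plus_network_ms, 2))
```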

As a general rule you should get around 3k to 4k writes per second per core.
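Applied to the 16-core boxes described later in this thread, that rule of thumb gives a rough per-node ceiling (a back-of-envelope sketch, not a benchmark):

```python
# Rough per-node write ceiling from the 3k-4k writes/sec/core rule of thumb
cores = 16
low_per_core, high_per_core = 3_000, 4_000
print(cores * low_per_core, "to", cores * high_per_core, "writes/sec per node")
```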

Cheers


-----------------
Aaron Morton
New Zealand
@aaronmorton

Co-Founder & Principal Consultant
Apache Cassandra Consulting
http://www.thelastpickle.com

On 13/12/2013, at 8:06 pm, Rahul Menon <rahul@apigee.com> wrote:

> Quote from http://www.datastax.com/dev/blog/performance-improvements-in-cassandra-1-2
> 
> "Murmur3Partitioner is NOT compatible with RandomPartitioner, so if you're upgrading
> and using the new cassandra.yaml file, be sure to change the partitioner back to RandomPartitioner"
> 
> 
> On Thu, Dec 12, 2013 at 10:57 PM, srmore <comomore@gmail.com> wrote:
> 
> 
> 
> On Thu, Dec 12, 2013 at 11:15 AM, J. Ryan Earl <oss@jryanearl.us> wrote:
> Why did you switch from Murmur3Partitioner to RandomPartitioner? Have you tried
> with Murmur3?
> 
> # partitioner: org.apache.cassandra.dht.Murmur3Partitioner
> partitioner: org.apache.cassandra.dht.RandomPartitioner
> 
> 
> Since I am comparing the two versions I am keeping all the settings the same. I see
> Murmur3Partitioner has some performance improvements, but switching back to
> RandomPartitioner should not cause performance to tank, right? Or am I missing something?
> 
> Also, is there an easier way to migrate the data from RandomPartitioner to Murmur3? (upgradesstables?)
> 
> 
>  
> 
> On Fri, Dec 6, 2013 at 10:36 AM, srmore <comomore@gmail.com> wrote:
> 
> 
> 
> On Fri, Dec 6, 2013 at 9:59 AM, Vicky Kak <vicky.kak@gmail.com> wrote:
> You have passed the JVM configuration, not the Cassandra configuration, which is
> in cassandra.yaml.
> 
> Apologies, was tuning JVM and that's what was in my mind. 
> Here are the cassandra settings http://pastebin.com/uN42GgYT
> 
>  
> The spikes are not that significant in our case and we are running the cluster with a 1.7
> GB heap.
> 
> Are these spikes causing any issue at your end?
> 
> There are no big spikes; the overall performance seems to be about 40% lower.
>  
> 
> 
> 
> 
> On Fri, Dec 6, 2013 at 9:10 PM, srmore <comomore@gmail.com> wrote:
> 
> 
> 
> On Fri, Dec 6, 2013 at 9:32 AM, Vicky Kak <vicky.kak@gmail.com> wrote:
> Hard to say much without knowing about the cassandra configurations.
>  
> The cassandra configuration is 
> -Xms8G
> -Xmx8G
> -Xmn800m
> -XX:+UseParNewGC
> -XX:+UseConcMarkSweepGC
> -XX:+CMSParallelRemarkEnabled
> -XX:SurvivorRatio=4
> -XX:MaxTenuringThreshold=2
> -XX:CMSInitiatingOccupancyFraction=75
> -XX:+UseCMSInitiatingOccupancyOnly
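For reference, in a stock 1.2 install these JVM options normally live in conf/cassandra-env.sh rather than cassandra.yaml; a minimal sketch (values taken from the flags above, not verified against this cluster):

```shell
# conf/cassandra-env.sh -- heap and GC settings matching the flags above
MAX_HEAP_SIZE="8G"      # becomes -Xms8G -Xmx8G
HEAP_NEWSIZE="800M"     # becomes -Xmn800m
JVM_OPTS="$JVM_OPTS -XX:SurvivorRatio=4"
JVM_OPTS="$JVM_OPTS -XX:MaxTenuringThreshold=2"
JVM_OPTS="$JVM_OPTS -XX:CMSInitiatingOccupancyFraction=75"
JVM_OPTS="$JVM_OPTS -XX:+UseCMSInitiatingOccupancyOnly"
```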
> 
>  
> Yes, compactions/GCs could spike the CPU; I had similar behavior with my setup.
> 
> Were you able to get around it ?
>  
> 
> -VK
> 
> 
> On Fri, Dec 6, 2013 at 7:40 PM, srmore <comomore@gmail.com> wrote:
> We have a 3-node cluster running Cassandra 1.2.12. They are pretty big machines, 64 GB RAM
> with 16 cores, and the Cassandra heap is 8 GB.
> 
> The interesting observation is that when I send traffic to one node its performance
> is 2x better than when I send traffic to all the nodes. We ran 1.0.11 on the same boxes and
> observed a slight dip, but not the halving seen with 1.2.12. In both cases we were writing
> with LOCAL_QUORUM. Changing CL to ONE makes a slight improvement, but not much.
> 
> The read_repair_chance is 0.1. We see some compactions running.
> 
> Following is my iostat -x output; sda is the SSD (for the commit log) and sdb is the spinner.
> 
> avg-cpu:  %user   %nice %system %iowait  %steal   %idle
>           66.46    0.00    8.95    0.01    0.00   24.58
> 
> Device:         rrqm/s   wrqm/s   r/s    w/s   rsec/s   wsec/s avgrq-sz avgqu-sz  await  svctm  %util
> sda               0.00    27.60  0.00   4.40     0.00   256.00    58.18     0.01   2.55   1.32   0.58
> sda1              0.00     0.00  0.00   0.00     0.00     0.00     0.00     0.00   0.00   0.00   0.00
> sda2              0.00    27.60  0.00   4.40     0.00   256.00    58.18     0.01   2.55   1.32   0.58
> sdb               0.00     0.00  0.00   0.00     0.00     0.00     0.00     0.00   0.00   0.00   0.00
> sdb1              0.00     0.00  0.00   0.00     0.00     0.00     0.00     0.00   0.00   0.00   0.00
> dm-0              0.00     0.00  0.00   0.00     0.00     0.00     0.00     0.00   0.00   0.00   0.00
> dm-1              0.00     0.00  0.00   0.60     0.00     4.80     8.00     0.00   5.33   2.67   0.16
> dm-2              0.00     0.00  0.00   0.00     0.00     0.00     0.00     0.00   0.00   0.00   0.00
> dm-3              0.00     0.00  0.00  24.80     0.00   198.40     8.00     0.24   9.80   0.13   0.32
> dm-4              0.00     0.00  0.00   6.60     0.00    52.80     8.00     0.01   1.36   0.55   0.36
> dm-5              0.00     0.00  0.00   0.00     0.00     0.00     0.00     0.00   0.00   0.00   0.00
> dm-6              0.00     0.00  0.00  24.80     0.00   198.40     8.00     0.29  11.60   0.13   0.32
> 
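A quick sanity check on the avg-cpu line above (a sketch using the figures as printed):

```python
# avg-cpu figures from the iostat output above
user, iowait, idle = 66.46, 0.01, 24.58

# High %user with near-zero %iowait points at the CPU (GC/compaction),
# not at the disks -- consistent with the low %util on every device.
bottleneck = "cpu" if user > 50 and iowait < 5 else "io-or-other"
print(bottleneck)
```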
> 
> 
> I can see I am CPU bound here but couldn't figure out exactly what is causing it. Is
> this caused by GC or compaction? I am thinking it is compaction; I see a lot of context switches
> and interrupts in my vmstat output.
> 
> I don't see GC activity in the logs but do see some compaction activity. Has anyone seen
> this? Or does anyone know what can be done to free up the CPU?
> 
> Thanks,
> Sandeep

