cassandra-user mailing list archives

From Dave Viner <davevi...@pobox.com>
Subject Re: RackAwareStrategy vs RackUnAwareStrategy on AWS EC2 cloud
Date Fri, 09 Jul 2010 17:44:37 GMT
Hi,

Can you post the stress test code and storage.conf used?
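(For context: in 0.6-era Cassandra, the placement strategy is set per keyspace in storage-conf.xml. A sketch of a typical fragment is below — the keyspace name, replication factor, and snitch shown are illustrative defaults, not necessarily what was actually used in this thread:)

```xml
<Keyspaces>
  <Keyspace Name="Keyspace1">
    <!-- RackAwareStrategy consults the snitch to spread replicas
         across datacenters/racks; RackUnAwareStrategy ignores topology -->
    <ReplicaPlacementStrategy>org.apache.cassandra.locator.RackAwareStrategy</ReplicaPlacementStrategy>
    <ReplicationFactor>2</ReplicationFactor>
    <EndPointSnitch>org.apache.cassandra.locator.EndPointSnitch</EndPointSnitch>
  </Keyspace>
</Keyspaces>
```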

I have a cluster in EC2 using RackAware.  However, I am in 1 region
(us-east-1) but 2 Availability Zones.  Amazon ensures that AZs are isolated
from each other, which gives you a failure-resistant cluster, while staying
in the same region allows for higher throughput numbers.
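(To see why the region split matters so much: RackAwareStrategy deliberately places the second replica in a *different* datacenter when one exists. The sketch below is a simplified paraphrase of that placement rule, not the actual Java implementation, using a hypothetical 4-node ring shaped like the one in this thread:)

```python
# Simplified sketch of 0.6-era RackAwareStrategy replica placement:
# walk the ring clockwise from the key's token; the first replica is the
# token owner, the second replica is the first node found in a *different*
# datacenter, and any remaining replicas fill in from ring order.

def place_replicas(ring, primary_index, replication_factor):
    """ring: list of (node, datacenter) tuples in token order."""
    n = len(ring)
    primary_node, primary_dc = ring[primary_index]
    replicas = [primary_node]
    # Second replica: the first node in another datacenter, if any exists.
    for i in range(1, n):
        node, dc = ring[(primary_index + i) % n]
        if dc != primary_dc:
            replicas.append(node)
            break
    # Fill remaining slots walking the ring, skipping nodes already chosen.
    for i in range(1, n):
        if len(replicas) >= replication_factor:
            break
        node, _ = ring[(primary_index + i) % n]
        if node not in replicas:
            replicas.append(node)
    return replicas[:replication_factor]

# A ring like the one described below: 3 nodes in East, 1 in West.
ring = [("n1", "east"), ("n2", "east"), ("n3", "east"), ("n4", "west")]
print(place_replicas(ring, 0, 2))  # ['n1', 'n4']
```

With this placement, every write carries a replica across the VPN tunnel, so per-operation latency includes a WAN round trip.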

Dave Viner


On Fri, Jul 9, 2010 at 10:36 AM, maneela a <maneelia@yahoo.com> wrote:

> Are there any known performance issues when a Cassandra cluster is launched
> with RackAwareStrategy? I see a huge performance difference between
> RackAwareStrategy and RackUnAwareStrategy.  Here are the details:
>
> We have a cluster of 4 EC2 X-Large nodes: 3 of them run in the East region
> and the 4th runs in the West region, and they all communicate with each
> other through a VPN tunnel interface, which is the only way we found to
> build a ring across Amazon cloud regions.
>
> We are able to process 3.5K write operations per second when we use
> RackUnAwareStrategy:
>
> :/home/ubuntu/cassandra/contrib/py_stress# ./stress.py -o insert -n 80000
> -y regular -d ec2-xxx-xxx-xxx-xx.compute-1.amazonaws.com --threads 100
> --keep-going
>
> total,interval_op_rate,avg_latency,elapsed_time
> 35935,3593,0.0289930914479,10
> 70531,3459,0.0289145907593,20
> 80000,946,0.0267288666213,30
>
>
> whereas we are able to process only about 250 write operations per second
> when we use RackAwareStrategy:
>
> :/home/ubuntu/cassandra/contrib/py_stress# ./stress.py -o insert -n 80000
> -y regular -d ec2-xxx-xxx-xxx-xx.compute-1.amazonaws.com --threads 100
> --keep-going
>
> total,interval_op_rate,avg_latency,elapsed_time
> 2327,232,0.434396038355,10
> 4772,244,0.40946514036,20
> 7383,261,0.384504625415,30
> 9924,254,0.392919449861,40
> 12525,260,0.383832110482,50
> 15158,263,0.378838069983,60
> 17784,262,0.383219807364,70
> 20416,263,0.381646275973,80
> 23030,261,0.382550528602,90
> 25644,261,0.384442176815,100
> 28268,262,0.380935921084,110
> 30910,264,0.377376309224,120
> 33541,263,0.385158945698,130
> 36119,257,0.387976026517,140
> 38735,261,0.382333525368,150
> 41342,260,0.38413751514,160
> 43925,258,0.387684800391,170
> 46642,271,0.36899637237,180
> 49291,264,0.378489510164,190
> 51931,264,0.3793784538,200
> 54573,264,0.378474057217,210
> 57253,268,0.374258003573,220
> 59884,263,0.380020038658,230
> 62484,260,0.387267011954,240
> 64728,224,0.439328571054,250
> 67340,261,0.389221810455,260
> 69920,258,0.386144905127,270
> 72531,261,0.384242234948,280
> 75202,267,0.372129596605,290
> 77843,264,0.354621512291,300
> 80000,215,0.183918378283,310
>
> Thanks in advance
>
> Niru
>
>
>
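(A note on the numbers in the quoted output: stress.py is a closed-loop load generator, so its throughput is capped at threads divided by average latency. Both runs match that ceiling, which suggests the RackAware run is simply paying a long per-operation round trip rather than hitting a bug. A quick sketch, with figures copied from the output above:)

```python
# Closed-loop throughput ceiling (Little's law): N workers, each issuing
# one request at a time, complete at most N / avg_latency ops per second.

def expected_ops_per_sec(threads, avg_latency_s):
    """Upper bound on throughput for a closed-loop load generator."""
    return threads / avg_latency_s

# RackUnAwareStrategy run: 100 threads, ~0.0289 s average latency.
print(round(expected_ops_per_sec(100, 0.0289)))  # 3460 -- matches ~3.5K observed

# RackAwareStrategy run: 100 threads, ~0.384 s average latency.
print(round(expected_ops_per_sec(100, 0.384)))   # 260 -- matches ~250-260 observed
```

So the ~14x throughput gap is fully explained by the ~13x latency gap, consistent with each RackAware write waiting on the cross-region VPN link.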
