Subject: Re: Really odd issue (AWS related?)
From: Alex Major <al3xdm@gmail.com>
To: user@cassandra.apache.org
Date: Sun, 28 Apr 2013 22:27:00 +0100

Hi Mike,

We had issues with the ephemeral drives when we first got started, although we never got to the bottom of it, so I can't help much with troubleshooting, unfortunately. Contrary to a lot of the comments on the mailing list, we've actually had a lot more success with EBS drives (PIOPs!). I'd definitely suggest trying striping 4 EBS drives (RAID 0) and using PIOPs.

You could be having a noisy neighbour problem; I don't believe that m1.large or m1.xlarge instances get all of the actual hardware, and virtualisation on EC2 still does a poor job of isolating resources.
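For illustration only, striping four provisioned-IOPS volumes could look something like the sketch below with mdadm; the xvdf-xvdi device names, array name and mount point are placeholders for wherever the volumes actually attach:

# assumes four PIOPs EBS volumes are already attached to the instance
mdadm --create /dev/md0 --level=0 --raid-devices=4 /dev/xvdf /dev/xvdg /dev/xvdh /dev/xvdi
# record the array so it reassembles cleanly on reboot (config file path varies by distro)
mdadm --detail --scan >> /etc/mdadm.conf
mkfs.xfs /dev/md0
mkdir -p /data && mount -t xfs -o noatime /dev/md0 /data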
We've also had more success with Ubuntu on EC2 - not so much with our Cassandra nodes, but some of our other services didn't run as well on the Amazon Linux AMIs.

Alex

On Sun, Apr 28, 2013 at 7:12 PM, Michael Theroux <mtheroux2@yahoo.com> wrote:

> I forgot to mention,
>
> When things go really bad, I'm seeing I/O waits in the 80-95% range. I restarted Cassandra once when a node was in this situation, and it took 45 minutes to start (primarily reading SSTables). Typically, a node would start in about 5 minutes.
>
> Thanks,
> -Mike
>
> On Apr 28, 2013, at 12:37 PM, Michael Theroux wrote:
>
> Hello,
>
> We've done some additional monitoring, and I think we have more information. We've been collecting vmstat information every minute, attempting to catch a node with issues.
>
> So it appears that the Cassandra node runs fine, then suddenly, without any correlation to any event that I can identify, the I/O wait time goes way up and stays up indefinitely. Even non-Cassandra I/O activities (such as snapshots and backups) start causing large I/O wait times when they typically would not. Previous to an issue, we would typically see I/O wait times of 3-4% with very few processes blocked on I/O. Once this issue manifests itself, I/O wait times for the same activities jump to 30-40% with many blocked processes. The I/O wait times do go back down when there is literally no activity.
>
> - Updating the node to the latest Amazon Linux patches and rebooting the instance doesn't correct the issue.
> - Backing up the node and replacing the instance does correct the issue. I/O wait times return to normal.
>
> One relatively recent change we've made is that we upgraded to m1.xlarge instances, which have 4 ephemeral drives available. We create a logical volume from the 4 drives with the idea that we should be able to get increased I/O throughput. When we ran m1.large instances, we had the same setup, although it was only using 2 ephemeral drives. We chose to use LVM rather than mdadm because we were having issues getting mdadm to create the RAID volume reliably on restart (and research showed that this was a common problem). LVM just worked (and had worked for months before this upgrade).
>
> For reference, this is the script we used to create the logical volume:
>
> vgcreate mnt_vg /dev/sdb /dev/sdc /dev/sdd /dev/sde
> lvcreate -L 1600G -n mnt_lv -i 4 mnt_vg -I 256K
> blockdev --setra 65536 /dev/mnt_vg/mnt_lv
> sleep 2
> mkfs.xfs /dev/mnt_vg/mnt_lv
> sleep 3
> mkdir -p /data && mount -t xfs -o noatime /dev/mnt_vg/mnt_lv /data
> sleep 3
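A quick aside on the quoted script above: with the same VG/LV names, the stripe layout, readahead and per-device behaviour can be double-checked with standard, read-only commands, for example:

lvdisplay -m /dev/mnt_vg/mnt_lv      # segment map should show 4 stripes with a 256K stripe size
blockdev --getra /dev/mnt_vg/mnt_lv  # should report the 65536 readahead set above
iostat -x 5                          # per-device await/%util, handy for spotting a single slow ephemeral drive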
> Another tidbit... thus far (and this may be only a coincidence), we've only had to replace DB nodes within a single availability zone within us-east. Other availability zones in the same region have yet to show an issue.
>
> It looks like I'm going to need to replace a third DB node today. Any advice would be appreciated.
>
> Thanks,
> -Mike
>
> On Apr 26, 2013, at 10:14 AM, Michael Theroux wrote:
>
> Thanks.
>
> We weren't monitoring this value when the issue occurred, and this particular issue has not appeared for a couple of days (knock on wood). Will keep an eye out though.
>
> -Mike
>
> On Apr 26, 2013, at 5:32 AM, Jason Wee wrote:
>
> top command? st: time stolen from this VM by the hypervisor
>
> jason
>
> On Fri, Apr 26, 2013 at 9:54 AM, Michael Theroux <mtheroux2@yahoo.com> wrote:
>
>> Sorry, not sure what CPU steal is :)
>>
>> I have the AWS console with detailed monitoring enabled... things seem to track close to the minute, so I can see the CPU load go to 0... then jump at about the minute Cassandra reports the dropped messages.
>>
>> -Mike
>>
>> On Apr 25, 2013, at 9:50 PM, aaron morton wrote:
>>
>> The messages appear right after the node "wakes up".
>>
>> Are you tracking CPU steal?
>>
>> -----------------
>> Aaron Morton
>> Freelance Cassandra Consultant
>> New Zealand
>>
>> @aaronmorton
>> http://www.thelastpickle.com
>>
>> On 25/04/2013, at 4:15 AM, Robert Coli <rcoli@eventbrite.com> wrote:
>>
>> On Wed, Apr 24, 2013 at 5:03 AM, Michael Theroux <mtheroux2@yahoo.com> wrote:
>>
>> Another related question. Once we see messages being dropped on one node, our Cassandra client appears to see this, reporting errors. We use LOCAL_QUORUM with an RF of 3 on all queries. Any idea why clients would see an error? If only one node reports an error, shouldn't the consistency level prevent the client from seeing an issue?
>>
>> If the client is talking to a broken/degraded coordinator node, RF/CL are unable to protect it from RPCTimeout. If it is unable to coordinate the request in a timely fashion, your clients will get errors.
>>
>> =Rob
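P.S. On the dropped messages and CPU steal discussed above - when a node gets into that state, it can be worth capturing a few standard snapshots alongside the vmstat output (nothing here is specific to this setup):

nodetool tpstats          # per-stage active/pending/blocked counts plus dropped message totals
nodetool compactionstats  # whether compactions are backing up behind the slow disks
iostat -x 5 3             # per-device await/%util, to tell a slow volume from an overloaded one
vmstat 5 3                # the "st" column is the CPU steal Jason and Aaron refer to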