From: DuyHai Doan
Date: Wed, 3 Aug 2016 12:49:58 +0200
Subject: Re: Memory leak and lockup on our 2.2.7 Cassandra cluster.
To: user@cassandra.apache.org

On a side note, do you monitor your disk I/O to see whether the disk
bandwidth can keep up with the huge spikes in writes? Use dstat during the
insert storm to see if you have big values for CPU wait.
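For example, an invocation along these lines (the 5-second interval is just
an illustration) puts the CPU wait column ("wai") next to per-disk and
network throughput:

    # time-stamped CPU (usr/sys/idl/wai), disk and network stats every 5 seconds
    dstat -tcdn 5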
On Wed, Aug 3, 2016 at 12:41 PM, Ben Slater <ben.slater@instaclustr.com> wrote:

> Yes, looks like you have a (at least one) 100MB partition which is big
> enough to cause issues. When you do lots of writes to the large partition
> it is likely to end up getting compacted (as per the log), and compactions
> often use a lot of memory / cause a lot of GC when they hit large
> partitions. This, in addition to the write load, is probably pushing you
> over the edge.
>
> There are some improvements in 3.6 that might help
> (https://issues.apache.org/jira/browse/CASSANDRA-11206) but the 2.2 to 3.x
> upgrade path seems risky at best at the moment. In any event, your best
> solution would be to find a way to make your partitions smaller (like
> 1/10th of the size).
>
> Cheers
> Ben
>
> On Wed, 3 Aug 2016 at 12:35 Kevin Burton <burton@spinn3r.com> wrote:
>
>> I have a theory as to what I think is happening here.
>>
>> There is a correlation between the massive content all at once and our
>> outages.
>>
>> Our scheme uses large buckets of content where we write to a
>> bucket/partition for 5 minutes, then move to a new one. This way we can
>> page through buckets.
>>
>> I think what's happening is that C* is reading the entire partition into
>> memory, then slicing through it... which would explain why it's running
>> out of memory.
>>
>> system.log:WARN  [CompactionExecutor:294] 2016-08-03 02:01:55,659
>> BigTableWriter.java:184 - Writing large partition
>> blogindex/content_legacy_2016_08_02:1470154500099 (106107128 bytes)
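As a rough check, the per-table partition-size percentiles would show how
common partitions of that size are; the keyspace and table below are taken
from the warning above, and cfhistograms is the nodetool command in the
2.2 line:

    # partition size and cell count percentiles for the table flagged above
    nodetool cfhistograms blogindex content_legacy_2016_08_02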
>>
>> On Tue, Aug 2, 2016 at 6:43 PM, Kevin Burton <burton@spinn3r.com> wrote:
>>
>>> We have a 60 node C* cluster running 2.2.7 and about 20GB of RAM
>>> allocated to each C* node. We're aware of the recommended 8GB limit to
>>> keep GCs low, but our memory has been creeping up, (probably) related
>>> to this bug.
>>>
>>> Here's what we're seeing... if we do a low level of writes, everything
>>> generally looks good.
>>>
>>> What happens is that we then need to catch up and do a TON of writes
>>> all in a small time window. Then C* nodes start dropping like flies.
>>> Some of them just GC frequently and are able to recover. When they GC
>>> like this we see GC pauses in the 30 second range, which cause them to
>>> stop gossiping for a while, and they drop out of the cluster.
>>>
>>> This happens as a flurry around the cluster, so we're not always able
>>> to catch which ones are doing it as they recover. However, if we have 3
>>> down, we mostly have a locked up cluster. Writes don't complete and our
>>> app essentially locks up.
>>>
>>> SOME of the boxes never recover. I'm in this state now. We have 3-5
>>> nodes that are in GC storms which they won't recover from.
>>>
>>> I reconfigured the GC settings to enable jstat.
>>>
>>> I was able to catch it while it was happening:
>>>
>>> ^Croot@util0067 ~ # sudo -u cassandra jstat -gcutil 4235 2500
>>>   S0     S1     E      O      M     CCS    YGC     YGCT    FGC    FGCT      GCT
>>>   0.00 100.00 100.00  94.76  97.60  93.06  10435 1686.191   471 1139.142 2825.332
>>>   0.00 100.00 100.00  94.76  97.60  93.06  10435 1686.191   471 1139.142 2825.332
>>>   0.00 100.00 100.00  94.76  97.60  93.06  10435 1686.191   471 1139.142 2825.332
>>>   0.00 100.00 100.00  94.76  97.60  93.06  10435 1686.191   471 1139.142 2825.332
>>>   0.00 100.00 100.00  94.76  97.60  93.06  10435 1686.191   471 1139.142 2825.332
>>>   0.00 100.00 100.00  94.76  97.60  93.06  10435 1686.191   471 1139.142 2825.332
>>>
>>> ... as you can see the box is legitimately out of memory. S0, S1, E and
>>> O are all completely full.
>>>
>>> I'm not sure where to go from here. I think 20GB for our workload is
>>> more than reasonable.
>>>
>>> 90% of the time they're well below 10GB of RAM used. While I was
>>> watching this box I was seeing 30% RAM used until it decided to climb
>>> to 100%.
>>>
>>> Any advice on what to do next... I don't see anything obvious in the
>>> logs to signal a problem.
>>>
>>> I attached all the command line arguments we use. Note that I think
>>> the cassandra-env.sh script puts them in there twice.
>>>
>>> -ea
>>> -javaagent:/usr/share/cassandra/lib/jamm-0.3.0.jar
>>> -XX:+CMSClassUnloadingEnabled
>>> -XX:+UseThreadPriorities
>>> -XX:ThreadPriorityPolicy=42
>>> -Xms20000M
>>> -Xmx20000M
>>> -Xmn4096M
>>> -XX:+HeapDumpOnOutOfMemoryError
>>> -Xss256k
>>> -XX:StringTableSize=1000003
>>> -XX:+UseParNewGC
>>> -XX:+UseConcMarkSweepGC
>>> -XX:+CMSParallelRemarkEnabled
>>> -XX:SurvivorRatio=8
>>> -XX:MaxTenuringThreshold=1
>>> -XX:CMSInitiatingOccupancyFraction=75
>>> -XX:+UseCMSInitiatingOccupancyOnly
>>> -XX:+UseTLAB
>>> -XX:CompileCommandFile=/hotspot_compiler
>>> -XX:CMSWaitDuration=10000
>>> -XX:+CMSParallelInitialMarkEnabled
>>> -XX:+CMSEdenChunksRecordAlways
>>> -XX:CMSWaitDuration=10000
>>> -XX:+UseCondCardMark
>>> -XX:+PrintGCDetails
>>> -XX:+PrintGCDateStamps
>>> -XX:+PrintHeapAtGC
>>> -XX:+PrintTenuringDistribution
>>> -XX:+PrintGCApplicationStoppedTime
>>> -XX:+PrintPromotionFailure
>>> -XX:PrintFLSStatistics=1
>>> -Xloggc:/var/log/cassandra/gc.log
>>> -XX:+UseGCLogFileRotation
>>> -XX:NumberOfGCLogFiles=10
>>> -XX:GCLogFileSize=10M
>>> -Djava.net.preferIPv4Stack=true
>>> -Dcom.sun.management.jmxremote.port=7199
>>> -Dcom.sun.management.jmxremote.rmi.port=7199
>>> -Dcom.sun.management.jmxremote.ssl=false
>>> -Dcom.sun.management.jmxremote.authenticate=false
>>> -Djava.library.path=/usr/share/cassandra/lib/sigar-bin
>>> -XX:+UnlockCommercialFeatures
>>> -XX:+FlightRecorder
>>> -ea
>>> -javaagent:/usr/share/cassandra/lib/jamm-0.3.0.jar
>>> -XX:+CMSClassUnloadingEnabled
>>> -XX:+UseThreadPriorities
>>> -XX:ThreadPriorityPolicy=42
>>> -Xms20000M
>>> -Xmx20000M
>>> -Xmn4096M
>>> -XX:+HeapDumpOnOutOfMemoryError
>>> -Xss256k
>>> -XX:StringTableSize=1000003
>>> -XX:+UseParNewGC
>>> -XX:+UseConcMarkSweepGC
>>> -XX:+CMSParallelRemarkEnabled
>>> -XX:SurvivorRatio=8
>>> -XX:MaxTenuringThreshold=1
>>> -XX:CMSInitiatingOccupancyFraction=75
>>> -XX:+UseCMSInitiatingOccupancyOnly
>>> -XX:+UseTLAB
>>> -XX:CompileCommandFile=/etc/cassandra/hotspot_compiler
>>> -XX:CMSWaitDuration=10000
>>> -XX:+CMSParallelInitialMarkEnabled
>>> -XX:+CMSEdenChunksRecordAlways
>>> -XX:CMSWaitDuration=10000
>>> -XX:+UseCondCardMark
>>> -XX:+PrintGCDetails
>>> -XX:+PrintGCDateStamps
>>> -XX:+PrintHeapAtGC
>>> -XX:+PrintTenuringDistribution
>>> -XX:+PrintGCApplicationStoppedTime
>>> -XX:+PrintPromotionFailure
>>> -XX:PrintFLSStatistics=1
>>> -Xloggc:/var/log/cassandra/gc.log
>>> -XX:+UseGCLogFileRotation
>>> -XX:NumberOfGCLogFiles=10
>>> -XX:GCLogFileSize=10M
>>> -Djava.net.preferIPv4Stack=true
>>> -Dcom.sun.management.jmxremote.port=7199
>>> -Dcom.sun.management.jmxremote.rmi.port=7199
>>> -Dcom.sun.management.jmxremote.ssl=false
>>> -Dcom.sun.management.jmxremote.authenticate=false
>>> -Djava.library.path=/usr/share/cassandra/lib/sigar-bin
>>> -XX:+UnlockCommercialFeatures
>>> -XX:+FlightRecorder
>>> -Dlogback.configurationFile=logback.xml
>>> -Dcassandra.logdir=/var/log/cassandra
>>> -Dcassandra.storagedir=
>>> -Dcassandra-pidfile=/var/run/cassandra/cassandra.pid
>>>
>>> --
>>>
>>> We're hiring if you know of any awesome Java Devops or Linux Operations
>>> Engineers!
>>>
>>> Founder/CEO Spinn3r.com
>>> Location: *San Francisco, CA*
>>> blog: http://burtonator.wordpress.com
>>> … or check out my Google+ profile
>>
>>
>> --
>>
>> We're hiring if you know of any awesome Java Devops or Linux Operations
>> Engineers!
>>
>> Founder/CEO Spinn3r.com
>> Location: *San Francisco, CA*
>> blog: http://burtonator.wordpress.com
>> … or check out my Google+ profile
>
> --
> Ben Slater
> Chief Product Officer
> Instaclustr: Cassandra + Spark - Managed | Consulting | Support
> +61 437 929 798
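A possible next diagnostic step, sketched with the PID reused from the jstat
capture above: a class histogram shows which object types are filling the old
generation. Plain -histo (rather than -histo:live) avoids forcing a full GC
on a node that is already struggling:

    # object counts and shallow sizes on the stuck node (PID from the jstat capture)
    sudo -u cassandra jmap -histo 4235 | head -n 30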