Subject: Re: performance tuning - where does the slowness come from?
From: Ran Tavory <rantav@gmail.com>
To: user@cassandra.apache.org
Date: Tue, 4 May 2010 22:52:10 +0300

it's a 64bit host.
When I cancel mmap I see less memory used and zero swapping, but it's slowly growing, so I'll have to wait and see.
Performance isn't much better; not sure what the bottleneck is now (could also be the application).

Now on the same host I see:

top - 15:43:59 up 12 days,  4:23,  1 user,  load average: 0.29, 0.68, 1.53
Tasks: 152 total,   1 running, 151 sleeping,   0 stopped,   0 zombie
Cpu(s):  1.1%us,  0.5%sy,  0.0%ni, 97.8%id,  0.3%wa,  0.0%hi,  0.2%si,  0.0%st
Mem:   8168376k total,  8120364k used,    48012k free,     2540k buffers
Swap:  4194296k total,    12816k used,  4181480k free,  5028672k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  SWAP nFLT COMMAND
25122 cassandr  22   0 4943m 2.9g   9m S 12.6 36.7  35:39.53 2.0g  141 java

$ vmstat 5
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 1  0  12816  46656   2664 5021340    8    6    79    34    3    1  1  1 95  3  0
 0  0  12816  48180   2672 5019460    0    0   282     9 1913 2450  2  1 97  0  0
 0  0  12816  45064   2688 5020688    0    0   282    83 1850 2303  1  1 97  0  0
 0  0  12816  47612   2696 5017520    0    0   102    59 1884 2328  1  1 98  0  0

On Tue, May 4, 2010 at 10:27 PM, Jonathan Ellis <jbellis@gmail.com> wrote:
> Are you using 32 bit hosts?
> If not, don't be scared of mmap using a
> lot of address space, you have plenty.  It won't make you swap more
> than using buffered i/o.
>
> On Tue, May 4, 2010 at 1:57 PM, Ran Tavory <rantav@gmail.com> wrote:
> > I canceled mmap and indeed memory usage is sane again. So far performance
> > hasn't been great, but I'll wait and see.
> > I'm also interested in a way to cap mmap so I can take advantage of it but
> > not swap the host to death...
> >
> > On Tue, May 4, 2010 at 9:38 PM, Kyusik Chung <kyusik@discovereads.com>
> > wrote:
> >>
> >> This sounds just like the slowness I was asking about in another thread -
> >> after a lot of reads, the machine uses up all available memory on the box
> >> and then starts swapping.
> >> My understanding was that mmap helps greatly with read and write perf
> >> (until the box starts swapping, I guess)... is there any way to use mmap and
> >> cap how much memory it takes up?
> >> What do people use in production?  mmap or no mmap?
> >> Thanks!
> >> Kyusik Chung
> >> On May 4, 2010, at 10:11 AM, Schubert Zhang wrote:
> >>
> >> 1. When initially starting up your nodes, please plan the InitialToken of
> >> each node evenly.
> >> 2. <DiskAccessMode>standard</DiskAccessMode>
> >>
> >> On Tue, May 4, 2010 at 9:09 PM, Boris Shulman <shulmanb@gmail.com> wrote:
> >>>
> >>> I think that the extra (more than 4GB) memory usage comes from the
> >>> mmaped io, that is why it happens only for reads.
> >>>
> >>> On Tue, May 4, 2010 at 2:02 PM, Jordan Pittier <jordan.pittier@gmail.com>
> >>> wrote:
> >>> > I'm facing the same issue with swap. It only occurs when I perform read
> >>> > operations (writes are very fast :)). So I can't help you with the
> >>> > memory problem.
> >>> >
> >>> > But to balance the load evenly between nodes in the cluster, just manually
> >>> > fix their tokens (the "formula" is i * 2^127 / nb_nodes).
> >>> >
> >>> > Jordzn
> >>> >
> >>> > On Tue, May 4, 2010 at 8:20 AM, Ran Tavory <rantav@gmail.com> wrote:
> >>> >>
> >>> >> I'm looking into performance issues on a 0.6.1 cluster.
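Jordan's token "formula" above (i * 2^127 / nb_nodes) can be sketched in a few lines. This is an illustrative sketch for the RandomPartitioner's 2^127 token space; the zero-based node numbering and integer division are assumptions:

```python
# Evenly spaced InitialTokens for a RandomPartitioner ring:
# token_i = i * 2**127 // nb_nodes, for i = 0 .. nb_nodes - 1.
def initial_tokens(nb_nodes):
    return [i * 2**127 // nb_nodes for i in range(nb_nodes)]

# For a 6-node cluster like the one in this thread:
for i, token in enumerate(initial_tokens(6)):
    print("node %d: %d" % (i, token))
```

Assigning these values as each node's InitialToken at first startup gives every node an equal slice of the ring, which is the balancing Schubert and Jordan are pointing at.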
> >>> >> I see two symptoms:
> >>> >> 1. Reads and writes are slow
> >>> >> 2. One of the hosts is doing a lot of GC.
> >>> >> 1 is slow in the sense that in its normal state the cluster used to make
> >>> >> around 3-5k reads and writes per second (6-10k operations per second),
> >>> >> but now it's in the order of 200-400 ops per second, sometimes even less.
> >>> >> 2 looks like this:
> >>> >> $ tail -f /outbrain/cassandra/log/system.log
> >>> >>  INFO [GC inspection] 2010-05-04 00:42:18,636 GCInspector.java (line 110)
> >>> >> GC for ParNew: 672 ms, 166482384 reclaimed leaving 2872087208 used; max is
> >>> >> 4432068608
> >>> >>  INFO [GC inspection] 2010-05-04 00:42:28,638 GCInspector.java (line 110)
> >>> >> GC for ParNew: 498 ms, 166493352 reclaimed leaving 2836049448 used; max is
> >>> >> 4432068608
> >>> >>  INFO [GC inspection] 2010-05-04 00:42:38,640 GCInspector.java (line 110)
> >>> >> GC for ParNew: 327 ms, 166091528 reclaimed leaving 2796888424 used; max is
> >>> >> 4432068608
> >>> >> ... and it goes on and on for hours, no stopping...
> >>> >> The cluster is made of 6 hosts, 3 in one DC and 3 in another.
> >>> >> Each host has 8G RAM.
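The GC log lines above already quantify the pressure: each ParNew cycle reclaims only ~166 MB while ~2.8 GB of the ~4.4 GB max stays occupied. A quick check with the numbers copied from the first log line (arithmetic only, no Cassandra internals assumed):

```python
# Numbers from the first GCInspector line above.
reclaimed  = 166482384     # bytes freed by one ParNew cycle
used_after = 2872087208    # heap still used after the cycle
heap_max   = 4432068608    # max heap

# ~65% of the heap survives every young-gen collection...
print("occupancy after GC: %.1f%%" % (100.0 * used_after / heap_max))
# ...and each cycle only claws back ~4% of the max heap.
print("reclaimed per cycle: %.1f%%" % (100.0 * reclaimed / heap_max))
```

That ratio is consistent with the symptom described: collections running back to back for hours without the heap ever dropping much.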
> >>> >> -Xmx=4G
> >>> >> For some reason the load isn't distributed evenly b/w the hosts, although
> >>> >> I'm not sure this is the cause of the slowness.
> >>> >> $ nodetool -h localhost -p 9004 ring
> >>> >> Address       Status     Load          Range                                      Ring
> >>> >>                                        144413773383729447702215082383444206680
> >>> >> 192.168.252.99Up         15.94 GB      66002764663998929243644931915471302076     |<--|
> >>> >> 192.168.254.57Up         19.84 GB      81288739225600737067856268063987022738     |   ^
> >>> >> 192.168.254.58Up         973.78 MB     86999744104066390588161689990810839743     v   |
> >>> >> 192.168.252.62Up         5.18 GB       88308919879653155454332084719458267849     |   ^
> >>> >> 192.168.254.59Up         10.57 GB      142482163220375328195837946953175033937    v   |
> >>> >> 192.168.252.61Up         11.36 GB      144413773383729447702215082383444206680    |-->|
> >>> >> The slow host is 192.168.252.61 and it isn't the most loaded one.
> >>> >> The host is waiting a lot on IO and the load average is usually 6-7:
> >>> >> $ w
> >>> >>  00:42:56 up 11 days, 13:22,  1 user,  load average: 6.21, 5.52, 3.93
> >>> >> $ vmstat 5
> >>> >> procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
> >>> >>  r  b   swpd   free   buff  cache   si   so    bi    bo   in    cs us sy id wa st
> >>> >>  0  8 2147844  45744   1816 4457384    6    5    66    32    5     2  1  1 96  2  0
> >>> >>  0  8 2147164  49020   1808 4451596  385    0  2345    58 3372  9957  2  2 78 18  0
> >>> >>  0  3 2146432  45704   1812 4453956  342    0  2274   108 3937 10732  2  2 78 19  0
> >>> >>  0  1 2146252  44696   1804 4453436  345  164  1939   294 3647  7833  2  2 78 18  0
> >>> >>  0  1 2145960  46924   1744 4451260  158    0  2423   122 4354 14597  2  2 77 18  0
> >>> >>  7  1 2138344  44676    952 4504148 1722  403  1722   406 1388   439 87  0 10  2  0
> >>> >>  7  2 2137248  45652    956 4499436 1384  655  1384   658 1356   392 87  0 10  3  0
> >>> >>  7  1 2135976  46764    956 4495020 1366  718  1366   718 1395   380 87  0  9  4  0
> >>> >>  0  8 2134484  46964    956 4489420 1673  555  1814   586 1601 215590 14  2 68 16  0
> >>> >>  0  1 2135388  47444    972 4488516  785  833  2390   995 3812  8305  2  2 77 20  0
> >>> >>  0 10 2135164  45928    980 4488796  788  543  2275   626   36
> >>> >> So, the host is swapping like crazy...
> >>> >> top shows that it's using a lot of memory. As noted before, -Xmx=4G and
> >>> >> nothing else seems to be using a lot of memory on the host except for the
> >>> >> cassandra process; however, of the 8G RAM on the host, 92% is used by
> >>> >> cassandra. How's that?
> >>> >> Top shows there's 3.9g shared, 7.2g resident and 15.9g virtual. Why
> >>> >> does it have 15g virtual? And why 7.2g RES? This can explain the
> >>> >> slowness and the swapping.
> >>> >> $ top
> >>> >>   PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
> >>> >> 20281 cassandr  25   0 15.9g 7.2g 3.9g S 33.3 92.6 175:30.27 java
> >>> >> So, can the total memory be controlled?
> >>> >> Or perhaps I'm looking in the wrong direction...
> >>> >> I've looked at all the cassandra JMX counts and nothing seemed suspicious
> >>> >> so far. By suspicious I mean a large number of pending tasks - there were
> >>> >> always very small numbers in each pool.
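One way to read the 15.9g VIRT / 7.2g RES / 4G heap puzzle above, consistent with Boris's mmap explanation: with mmap'd I/O the SSTable files are mapped into the process address space, so VIRT grows by roughly the data size, and RES counts whichever mapped pages are currently in RAM on top of the Java heap. A rough sketch with the numbers from the top output (attributing the whole non-heap remainder to mmap is an assumption):

```python
virt_gib = 15.9   # java VIRT, from top
res_gib  = 7.2    # java RES
heap_gib = 4.0    # -Xmx4G

# Resident pages beyond the heap are, under this reading, mostly
# mmap'd SSTable data the kernel currently keeps in RAM.
mmap_resident_gib = res_gib - heap_gib    # ~3.2 GiB resident
mmap_virtual_gib  = virt_gib - heap_gib   # ~11.9 GiB mapped in total
print("mmap'd data resident: ~%.1f GiB of ~%.1f GiB mapped"
      % (mmap_resident_gib, mmap_virtual_gib))
```

Mapped pages are evictable file-backed memory, so large VIRT by itself is harmless on 64-bit (Jonathan's point); the swapping comes from the kernel's choice of what to evict under pressure, not from the mapping size.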
> >>> >> About read and write latencies, I'm not sure what the normal state is,
> >>> >> but here's an example of what I see on the problematic host:
> >>> >> #mbean = org.apache.cassandra.service:type=StorageProxy:
> >>> >> RecentReadLatencyMicros = 30105.888180684495;
> >>> >> TotalReadLatencyMicros = 78543052801;
> >>> >> TotalWriteLatencyMicros = 4213118609;
> >>> >> RecentWriteLatencyMicros = 1444.4809201925639;
> >>> >> ReadOperations = 4779553;
> >>> >> RangeOperations = 0;
> >>> >> TotalRangeLatencyMicros = 0;
> >>> >> RecentRangeLatencyMicros = NaN;
> >>> >> WriteOperations = 4740093;
> >>> >> And the only pool in which I do see some pending tasks is the
> >>> >> ROW-READ-STAGE, but it doesn't look like much, usually around 6-8:
> >>> >> #mbean = org.apache.cassandra.concurrent:type=ROW-READ-STAGE:
> >>> >> ActiveCount = 8;
> >>> >> PendingTasks = 8;
> >>> >> CompletedTasks = 5427955;
> >>> >> Any help finding the solution is appreciated, thanks...
> >>> >> Below are a few more JMXes I collected from the system that may be
> >>> >> interesting.
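The StorageProxy counters above reduce to per-operation averages, which make the read/write asymmetry obvious. This is plain division of the totals quoted in the mail, nothing more:

```python
# Lifetime totals from the StorageProxy mbean above (microseconds / counts).
total_read_us  = 78543052801
reads          = 4779553
total_write_us = 4213118609
writes         = 4740093

print("avg read:  %.1f ms" % (total_read_us / reads / 1000.0))   # ~16 ms
print("avg write: %.2f ms" % (total_write_us / writes / 1000.0)) # ~0.9 ms
```

So reads average ~16 ms over the node's lifetime and ~30 ms recently (RecentReadLatencyMicros), versus ~1 ms writes; the read path, and whatever page-cache misses and swapping it triggers, is where the time is going.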
> >>> >> #mbean = java.lang:type=Memory:
> >>> >> Verbose = false;
> >>> >> HeapMemoryUsage = {
> >>> >>   committed = 3767279616;
> >>> >>   init = 134217728;
> >>> >>   max = 4293656576;
> >>> >>   used = 1237105080;
> >>> >> };
> >>> >> NonHeapMemoryUsage = {
> >>> >>   committed = 35061760;
> >>> >>   init = 24313856;
> >>> >>   max = 138412032;
> >>> >>   used = 23151320;
> >>> >> };
> >>> >> ObjectPendingFinalizationCount = 0;
> >>> >> #mbean = java.lang:name=ParNew,type=GarbageCollector:
> >>> >> LastGcInfo = {
> >>> >>   GcThreadCount = 11;
> >>> >>   duration = 136;
> >>> >>   endTime = 42219272;
> >>> >>   id = 11719;
> >>> >>   memoryUsageAfterGc = {
> >>> >>     ( CMS Perm Gen ) = { committed = 29229056; init = 21757952; max = 88080384; used = 17648848; };
> >>> >>     ( Code Cache ) = { committed = 5832704; init = 2555904; max = 50331648; used = 5563520; };
> >>> >>     ( CMS Old Gen ) = { committed = 3594133504; init = 112459776; max = 4120510464; used = 964565720; };
> >>> >>     ( Par Eden Space ) = { committed = 171835392; init = 21495808; max = 171835392; used = 0; };
> >>> >>     ( Par Survivor Space ) = { committed = 1310720; init = 131072; max = 1310720; used = 0; };
> >>> >>   };
> >>> >>   memoryUsageBeforeGc = {
> >>> >>     ( CMS Perm Gen ) = { committed = 29229056; init = 21757952; max = 88080384; used = 17648848; };
> >>> >>     ( Code Cache ) = { committed = 5832704; init = 2555904; max = 50331648; used = 5563520; };
> >>> >>     ( CMS Old Gen ) = { committed = 3594133504; init = 112459776; max = 4120510464; used = 959221872; };
> >>> >>     ( Par Eden Space ) = { committed = 171835392; init = 21495808; max = 171835392; used = 171835392; };
> >>> >>     ( Par Survivor Space ) = { committed = 1310720; init = 131072; max = 1310720; used = 0; };
> >>> >>   };
> >>> >>   startTime = 42219136;
> >>> >> };
> >>> >> CollectionCount = 11720;
> >>> >> CollectionTime = 4561730;
> >>> >> Name = ParNew;
> >>> >> Valid = true;
> >>> >> MemoryPoolNames = [ Par Eden Space, Par Survivor Space ];
> >>> >> #mbean = java.lang:type=OperatingSystem:
> >>> >> MaxFileDescriptorCount = 63536;
> >>> >> OpenFileDescriptorCount = 75;
> >>> >> CommittedVirtualMemorySize = 17787711488;
> >>> >> FreePhysicalMemorySize = 45522944;
> >>> >> FreeSwapSpaceSize = 2123968512;
> >>> >> ProcessCpuTime = 12251460000000;
> >>> >> TotalPhysicalMemorySize = 8364417024;
> >>> >> TotalSwapSpaceSize = 4294959104;
> >>> >> Name = Linux;
> >>> >> AvailableProcessors = 8;
> >>> >> Arch = amd64;
> >>> >> SystemLoadAverage = 4.36;
> >>> >> Version = 2.6.18-164.15.1.el5;
> >>> >> #mbean = java.lang:type=Runtime:
> >>> >> Name = 20281@ob1061.nydc1.outbrain.com;
> >>> >> ClassPath = /outbrain/cassandra/apache-cassandra-0.6.1/bin/../conf:/outbrain/cassandra/apache-cassandra-0.6.1/bin/../build/classes:/outbrain/cassandra/apache-cassandra-0.6.1/bin/../lib/antlr-3.1.3.jar:/outbrain/cassandra/apache-cassandra-0.6.1/bin/../lib/apache-cassandra-0.6.1.jar:/outbrain/cassandra/apache-cassandra-0.6.1/bin/../lib/avro-1.2.0-dev.jar:/outbrain/cassandra/apache-cassandra-0.6.1/bin/../lib/clhm-production.jar:/outbrain/cassandra/apache-cassandra-0.6.1/bin/../lib/commons-cli-1.1.jar:/outbrain/cassandra/apache-cassandra-0.6.1/bin/../lib/commons-codec-1.2.jar:/outbrain/cassandra/apache-cassandra-0.6.1/bin/../lib/commons-collections-3.2.1.jar:/outbrain/cassandra/apache-cassandra-0.6.1/bin/../lib/commons-lang-2.4.jar:/outbrain/cassandra/apache-cassandra-0.6.1/bin/../lib/google-collections-1.0.jar:/outbrain/cassandra/apache-cassandra-0.6.1/bin/../lib/hadoop-core-0.20.1.jar:/outbrain/cassandra/apache-cassandra-0.6.1/bin/../lib/high-scale-lib.jar:/outbrain/cassandra/apache-cassandra-0.6.1/bin/../lib/ivy-2.1.0.jar:/outbrain/cassandra/apache-cassandra-0.6.1/bin/../lib/jackson-core-asl-1.4.0.jar:/outbrain/cassandra/apache-cassandra-0.6.1/bin/../lib/jackson-mapper-asl-1.4.0.jar:/outbrain/cassandra/apache-cassandra-0.6.1/bin/../lib/jline-0.9.94.jar:/outbrain/cassandra/apache-cassandra-0.6.1/bin/../lib/json-simple-1.1.jar:/outbrain/cassandra/apache-cassandra-0.6.1/bin/../lib/libthrift-r917130.jar:/outbrain/cassandra/apache-cassandra-0.6.1/bin/../lib/log4j-1.2.14.jar:/outbrain/cassandra/apache-cassandra-0.6.1/bin/../lib/slf4j-api-1.5.8.jar:/outbrain/cassandra/apache-cassandra-0.6.1/bin/../lib/slf4j-log4j12-1.5.8.jar;
> >>> >> BootClassPath = /usr/java/jdk1.6.0_17/jre/lib/alt-rt.jar:/usr/java/jdk1.6.0_17/jre/lib/resources.jar:/usr/java/jdk1.6.0_17/jre/lib/rt.jar:/usr/java/jdk1.6.0_17/jre/lib/sunrsasign.jar:/usr/java/jdk1.6.0_17/jre/lib/jsse.jar:/usr/java/jdk1.6.0_17/jre/lib/jce.jar:/usr/java/jdk1.6.0_17/jre/lib/charsets.jar:/usr/java/jdk1.6.0_17/jre/classes;
> >>> >> LibraryPath = /usr/java/jdk1.6.0_17/jre/lib/amd64/server:/usr/java/jdk1.6.0_17/jre/lib/amd64:/usr/java/jdk1.6.0_17/jre/../lib/amd64:/usr/java/packages/lib/amd64:/lib:/usr/lib;
> >>> >> VmName = Java HotSpot(TM) 64-Bit Server VM;
> >>> >> VmVendor = Sun Microsystems Inc.;
> >>> >> VmVersion = 14.3-b01;
> >>> >> BootClassPathSupported = true;
> >>> >> InputArguments = [ -ea, -Xms128M, -Xmx4G, -XX:TargetSurvivorRatio=90,
> >>> >> -XX:+AggressiveOpts, -XX:+UseParNewGC, -XX:+UseConcMarkSweepGC,
> >>> >> -XX:+CMSParallelRemarkEnabled, -XX:+HeapDumpOnOutOfMemoryError,
> >>> >> -XX:SurvivorRatio=128, -XX:MaxTenuringThreshold=0,
> >>> >> -Dcom.sun.management.jmxremote.port=9004,
> >>> >> -Dcom.sun.management.jmxremote.ssl=false,
> >>> >> -Dcom.sun.management.jmxremote.authenticate=false,
> >>> >> -Dstorage-config=/outbrain/cassandra/apache-cassandra-0.6.1/bin/../conf,
> >>> >> -Dcassandra-pidfile=/var/run/cassandra.pid ];
> >>> >> ManagementSpecVersion = 1.2;
> >>> >> SpecName = Java Virtual Machine Specification;
> >>> >> SpecVendor = Sun Microsystems Inc.;
> >>> >> SpecVersion = 1.0;
> >>> >> StartTime = 1272911001415;
> >>> >> ...
>
> --
> Jonathan Ellis
> Project Chair, Apache Cassandra
> co-founder of Riptano, the source for professional Cassandra support
> http://riptano.com
it's a 64bit host.
when I cancel mmap I see less m= emory used and zero swapping, but it's slowly growing so I'll have = to wait and see.=C2=A0
Performance isn't much better, not sur= e what's the bottleneck now (could also be the application).

Now on the same host I see:
top - 15:43= :59 up 12 days, =C2=A04:23, =C2=A01 user, =C2=A0load average: 0.29, 0.68, 1= .53
Tasks: 152 total, =C2=A0 1 running, 151 sleeping, =C2=A0 0 stopped, = =C2=A0 0 zombie
Cpu(s): =C2=A01.1%us, =C2=A00.5%sy, =C2=A0= 0.0%ni, 97.8%id, =C2=A00.3%wa, =C2=A00.0%hi, =C2=A00.2%si, =C2=A00.0%st
Mem: =C2=A0 8168376k total, =C2=A08120364k used, =C2=A0 =C2=A048012k f= ree, =C2=A0 =C2=A0 2540k buffers
Swap: =C2=A04194296k tota= l, =C2=A0 =C2=A012816k used, =C2=A04181480k free, =C2=A05028672k cached

=C2=A0=C2=A0PID USER =C2=A0 =C2=A0 =C2=A0PR =C2= =A0NI =C2=A0VIRT =C2=A0RES =C2=A0SHR S %CPU %MEM =C2=A0 =C2=A0TIME+ =C2=A0S= WAP nFLT COMMAND =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 = =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2= =A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 = =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2= =A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0= =C2=A0
25122 cassandr =C2=A022 =C2=A0 0 4943m 2.9g =C2=A0 9m S 12.6 36.7 =C2= =A035:39.53 2.0g =C2=A0141 java =C2=A0

$ vmstat 5
procs -----------memory---------- ---swap-- -----io---- --system-- ---= --cpu------
=C2=A0r =C2=A0b =C2=A0 swpd =C2=A0 free =C2=A0= buff =C2=A0cache =C2=A0 si =C2=A0 so =C2=A0 =C2=A0bi =C2=A0 =C2=A0bo =C2= =A0 in =C2=A0 cs us sy id wa st
=C2=A01 =C2=A00 =C2=A012816 =C2=A046656 =C2=A0 2664 5021340 =C2=A0 =C2= =A08 =C2=A0 =C2=A06 =C2=A0 =C2=A079 =C2=A0 =C2=A034 =C2=A0 =C2=A03 =C2=A0 = =C2=A01 =C2=A01 =C2=A01 95 =C2=A03 =C2=A00
=C2=A00 =C2=A00= =C2=A012816 =C2=A048180 =C2=A0 2672 5019460 =C2=A0 =C2=A00 =C2=A0 =C2=A00 = =C2=A0 282 =C2=A0 =C2=A0 9 1913 2450 =C2=A02 =C2=A01 97 =C2=A00 =C2=A00
=C2=A00 =C2=A00 =C2=A012816 =C2=A045064 =C2=A0 2688 5020688 =C2=A0 =C2= =A00 =C2=A0 =C2=A00 =C2=A0 282 =C2=A0 =C2=A083 1850 2303 =C2=A01 =C2=A01 97= =C2=A00 =C2=A00
=C2=A00 =C2=A00 =C2=A012816 =C2=A047612 = =C2=A0 2696 5017520 =C2=A0 =C2=A00 =C2=A0 =C2=A00 =C2=A0 102 =C2=A0 =C2=A05= 9 1884 2328 =C2=A01 =C2=A01 98 =C2=A00 =C2=A00


On Tue, May 4, 2= 010 at 10:27 PM, Jonathan Ellis <jbellis@gmail.com> wrote:
Are you using 32 bit hosts? =C2=A0If not do= n't be scared of mmap using a
lot of address space, you have plenty. =C2=A0It won't make you swap mor= e
than using buffered i/o.

On Tue, May 4, 2010 at 1:57 PM, Ran Tavory <rantav@gmail.com> wrote:
> I canceled mmap and indeed memory usage is sane again. So far performa= nce
> hasn't been great, but I'll wait and see.
> I'm also interested in a way to cap mmap so I can take advantage o= f it but
> not swap the host to death...
>
> On Tue, May 4, 2010 at 9:38 PM, Kyusik Chung <kyusik@discovereads.com>
> wrote:
>>
>> This sounds just like the slowness I was asking about in another t= hread -
>> after a lot of reads, the machine uses up all available memory on = the box
>> and then starts swapping.
>> My understanding was that mmap helps greatly with read and write p= erf
>> (until the box starts swapping I guess)...is there any way to use = mmap and
>> cap how much memory it takes up?
>> What do people use in production? =C2=A0mmap or no mmap?
>> Thanks!
>> Kyusik Chung
>> On May 4, 2010, at 10:11 AM, Schubert Zhang wrote:
>>
>> 1. When initially startup your nodes, please plan your InitialToke= n of
>> each node evenly.
>> 2. <DiskAccessMode>standard</DiskAccessMode>
>>
>> On Tue, May 4, 2010 at 9:09 PM, Boris Shulman <shulmanb@gmail.com> wrote:
>>>
>>> I think that the extra (more than 4GB) memory usage comes from= the
>>> mmaped io, that is why it happens only for reads.
>>>
>>> On Tue, May 4, 2010 at 2:02 PM, Jordan Pittier <jordan.pittier@gmail.com>
>>> wrote:
>>> > I'm facing the same issue with swap. It only occurs w= hen I perform read
>>> > operations (write are very fast :)). So I can't help = you with the
>>> > memory
>>> > probleme.
>>> >
>>> > But to balance the load evenly between nodes in cluster j= ust manually
>>> > fix
>>> > their token.(the "formula" is i * 2^127 / nb_no= des).
>>> >
>>> > Jordzn
>>> >
>>> > On Tue, May 4, 2010 at 8:20 AM, Ran Tavory <rantav@gmail.com> wrote:
>>> >>
>>> >> I'm looking into performance issues on a 0.6.1 cl= uster. I see two
>>> >> symptoms:
>>> >> 1. Reads and writes are slow
>>> >> 2. One of the hosts is doing a lot of GC.
>>> >> 1 is slow in the sense that in normal state the clust= er used to make
>>> >> around 3-5k read and writes per second (6-10k operati= ons per second),
>>> >> but
>>> >> how it's in the order of 200-400 ops per second, = sometimes even less.
>>> >> 2 looks like this:
>>> >> $ tail -f /outbrain/cassandra/log/system.log
>>> >> =C2=A0INFO [GC inspection] 2010-05-04 00:42:18,636 GC= Inspector.java (line
>>> >> 110)
>>> >> GC for ParNew: 672 ms, 166482384 reclaimed leaving 28= 72087208 used;
>>> >> max is
>>> >> 4432068608
>>> >> =C2=A0INFO [GC inspection] 2010-05-04 00:42:28,638 GC= Inspector.java (line
>>> >> 110)
>>> >> GC for ParNew: 498 ms, 166493352 reclaimed leaving 28= 36049448 used;
>>> >> max is
>>> >> 4432068608
>>> >> =C2=A0INFO [GC inspection] 2010-05-04 00:42:38,640 GC= Inspector.java (line
>>> >> 110)
>>> >> GC for ParNew: 327 ms, 166091528 reclaimed leaving 27= 96888424 used;
>>> >> max is
>>> >> 4432068608
>>> >> ... and it goes on and on for hours, no stopping... >>> >> The cluster is made of 6 hosts, 3 in one DC and 3 in = another.
>>> >> Each host has 8G RAM.
>>> >> -Xmx=3D4G
>>> >> For some reason, the load isn't distributed evenl= y b/w the hosts,
>>> >> although
>>> >> I'm not sure this is the cause for slowness
>>> >> $ nodetool -h localhost -p 9004 ring
>>> >> Address =C2=A0 =C2=A0 =C2=A0 Status =C2=A0 =C2=A0 Loa= d =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0Range
>>> >> =C2=A0 =C2=A0 =C2=A0 =C2=A0Ring
>>> >>
>>> >> 144413773383729447702215082383444206680
>>> >> 192.168.252.99Up =C2=A0 =C2=A0 =C2=A0 =C2=A0 15.94 GB=
>>> >> =C2=A066002764663998929243644931915471302076 =C2=A0 = =C2=A0 |<--|
>>> >> 192.168.254.57Up =C2=A0 =C2=A0 =C2=A0 =C2=A0 19.84 GB=
>>> >> =C2=A081288739225600737067856268063987022738 =C2=A0 = =C2=A0 | =C2=A0 ^
>>> >> 192.168.254.58Up =C2=A0 =C2=A0 =C2=A0 =C2=A0 973.78 M= B
>>> >> 86999744104066390588161689990810839743 =C2=A0 =C2=A0 = v =C2=A0 |
>>> >> 192.168.252.62Up =C2=A0 =C2=A0 =C2=A0 =C2=A0 5.18 GB<= br> >>> >> 88308919879653155454332084719458267849 =C2=A0 =C2=A0 = | =C2=A0 ^
>>> >> 192.168.254.59Up =C2=A0 =C2=A0 =C2=A0 =C2=A0 10.57 GB=
>>> >> =C2=A0142482163220375328195837946953175033937 =C2=A0 = =C2=A0v =C2=A0 |
>>> >> 192.168.252.61Up =C2=A0 =C2=A0 =C2=A0 =C2=A0 11.36 GB=
>>> >> =C2=A0144413773383729447702215082383444206680 =C2=A0 = =C2=A0|-->|
>>> >> The slow host is=C2=A0192.168.252.61 and it isn't= the most loaded one.
>>> >> The host is waiting a lot on IO and the load average = is usually 6-7
>>> >> $ w
>>> >> =C2=A000:42:56 up 11 days, 13:22, =C2=A01 user, =C2= =A0load average: 6.21, 5.52, 3.93
>>> >> $ vmstat 5
>>> >> procs -----------memory---------- ---swap-- -----io--= -- --system--
>>> >> -----cpu------
>>> >> =C2=A0r =C2=A0b =C2=A0 swpd =C2=A0 free =C2=A0 buff = =C2=A0cache =C2=A0=C2=A0si=C2=A0=C2=A0=C2=A0so=C2=A0=C2=A0 =C2=A0bi =C2=A0 = =C2=A0bo =C2=A0 in =C2=A0 cs us
>>> >> sy id
>>> >> wa st
>>> >> =C2=A00 =C2=A08 2147844 =C2=A045744 =C2=A0 1816 44573= 84 =C2=A0 =C2=A06 =C2=A0 =C2=A05 =C2=A0 =C2=A066 =C2=A0 =C2=A032 =C2=A0 =C2= =A05 =C2=A0 =C2=A02 =C2=A01
>>> >> =C2=A01
>>> >> 96 =C2=A02 =C2=A00
>>> >> =C2=A00 =C2=A08 2147164 =C2=A049020 =C2=A0 1808 44515= 96 =C2=A0385=C2=A0=C2=A0 =C2=A00 =C2=A02345 =C2=A0 =C2=A058 3372 9957 =C2= =A02
>>> >> =C2=A02
>>> >> 78 18 =C2=A00
>>> >> =C2=A00 =C2=A03 2146432 =C2=A045704 =C2=A0 1812 44539= 56 =C2=A0342=C2=A0=C2=A0 =C2=A00 =C2=A02274 =C2=A0 108 3937 10732
>>> >> =C2=A02 =C2=A02
>>> >> 78 19 =C2=A00
>>> >> =C2=A00 =C2=A01 2146252 =C2=A044696 =C2=A0 1804 44534= 36 =C2=A0345=C2=A0=C2=A0164=C2=A0=C2=A01939 =C2=A0 294 3647 7833 =C2=A02 >>> >> =C2=A02
>>> >> 78 18 =C2=A00
>>> >> =C2=A00 =C2=A01 2145960 =C2=A046924 =C2=A0 1744 44512= 60 =C2=A0158=C2=A0=C2=A0 =C2=A00 =C2=A02423 =C2=A0 122 4354 14597
>>> >> =C2=A02 =C2=A02
>>> >> 77 18 =C2=A00
>>> >> =C2=A07 =C2=A01 2138344 =C2=A044676 =C2=A0 =C2=A0952 = 4504148=C2=A01722=C2=A0=C2=A0403=C2=A0=C2=A01722 =C2=A0 406 1388 =C2=A0439 = 87
>>> >> =C2=A00
>>> >> 10 =C2=A02 =C2=A00
>>> >> =C2=A07 =C2=A02 2137248 =C2=A045652 =C2=A0 =C2=A0956 = 4499436=C2=A01384=C2=A0=C2=A0655=C2=A0=C2=A01384 =C2=A0 658 1356 =C2=A0392 = 87
>>> >> =C2=A00
>>> >> 10 =C2=A03 =C2=A00
>>> >> =C2=A07 =C2=A01 2135976 =C2=A046764 =C2=A0 =C2=A0956 = 4495020=C2=A01366=C2=A0=C2=A0718=C2=A0=C2=A01366 =C2=A0 718 1395 =C2=A0380 = 87
>>> >> =C2=A00
>>> >> =C2=A09 =C2=A04 =C2=A00
>>> >> =C2=A00 =C2=A08 2134484 =C2=A046964 =C2=A0 =C2=A0956 = 4489420=C2=A01673=C2=A0=C2=A0555=C2=A0=C2=A01814 =C2=A0 586 1601 215590
>>> >> 14
>>> >> =C2=A02 68 16 =C2=A00
>>> >> =C2=A00 =C2=A01 2135388 =C2=A047444 =C2=A0 =C2=A0972 = 4488516 =C2=A0785=C2=A0=C2=A0833=C2=A0=C2=A02390 =C2=A0 995 3812 8305 =C2= =A02
>>> >> =C2=A02
>>> >> 77 20 =C2=A00
>>> >> =C2=A00 10 2135164 =C2=A045928 =C2=A0 =C2=A0980 44887= 96 =C2=A0788=C2=A0=C2=A0543=C2=A0=C2=A02275 =C2=A0 626 36
>>> >> So, the host is swapping like crazy...
>>> >> top shows that it's using a lot of memory. As not= ed before -Xmx=3D4G and
>>> >> nothing else seems to be using a lot of memory on the= host except for
>>> >> the
>>> >> cassandra process, however, of the 8G ram on the host= , 92% is used by
>>> >> cassandra. How's that?
>>> >> Top shows there's 3.9g Shared and 7.2g Resident a= nd 15.9g Virtual. Why
>>> >> does it have 15g virtual? And why 7.2 RES? This can e= xplain the
>>> >> slowness in
>>> >> swapping.
>>> >> $ top
>>> >> =C2=A0=C2=A0PID USER =C2=A0 =C2=A0 =C2=A0PR =C2=A0NI = =C2=A0VIRT =C2=A0RES =C2=A0SHR S %CPU %MEM =C2=A0 =C2=A0TIME+ =C2=A0COMMAND=
>>> >>
>>> >>
>>> >> 20281 cassandr =C2=A025 =C2=A0 0=C2=A015.9g=C2=A07.2g= 3.9g S 33.3 92.6 175:30.27 java
>>> >> So, can the total memory be controlled?
>>> >> Or perhaps I'm looking in the wrong direction...<= br> >>> >> I've looked at all the cassandra JMX counts and n= othing seemed
>>> >> suspicious
>>> >> so far. By suspicious i mean a large number of pendin= g tasks - there
>>> >> were
>>> >> always very small=C2=A0numbers=C2=A0in each pool.
>>> >> About read and write latencies, I'm not sure what= the normal state is,
>>> >> but
>>> >> here's an example of what I see on the problemati= c host:
>>> >> #mbean =3D org.apache.cassandra.service:type=3DStorag= eProxy:
>>> >> RecentReadLatencyMicros =3D 30105.888180684495;
>>> >> TotalReadLatencyMicros =3D 78543052801;
>>> >> TotalWriteLatencyMicros =3D 4213118609;
>>> >> RecentWriteLatencyMicros =3D 1444.4809201925639;
>>> >> ReadOperations =3D 4779553;
>>> >> RangeOperations =3D 0;
>>> >> TotalRangeLatencyMicros =3D 0;
>>> >> RecentRangeLatencyMicros =3D NaN;
>>> >> WriteOperations =3D 4740093;
>>> >> And the only pool that I do see some pending tasks is= the
>>> >> ROW-READ-STAGE,
>>> >> but it doesn't look like much, usually around 6-8= :
>>> >> #mbean =3D org.apache.cassandra.concurrent:type=3DROW= -READ-STAGE:
>>> >> ActiveCount =3D 8;
>>> >> PendingTasks =3D 8;
>>> >> CompletedTasks =3D 5427955;
>>> >> Any help finding the solution is appreciated, thanks.= ..
>>> >> Below are a few more JMXes I collected from the syste= m that may be
>>> >> interesting.
>>> >> #mbean =3D java.lang:type=3DMemory:
>>> >> Verbose =3D false;
>>> >> HeapMemoryUsage =3D {
>>> >> =C2=A0=C2=A0committed =3D 3767279616;
>>> >> =C2=A0=C2=A0init =3D 134217728;
>>> >> =C2=A0=C2=A0max =3D 4293656576;
>>> >> =C2=A0=C2=A0used =3D 1237105080;
>>> >> =C2=A0};
>>> >> NonHeapMemoryUsage =3D {
>>> >> =C2=A0=C2=A0committed =3D 35061760;
>>> >> =C2=A0=C2=A0init =3D 24313856;
>>> >> =C2=A0=C2=A0max =3D 138412032;
>>> >> =C2=A0=C2=A0used =3D 23151320;
>>> >> =C2=A0};
>>> >> ObjectPendingFinalizationCount =3D 0;
>>> >> #mbean = java.lang:name=ParNew,type=GarbageCollector:
>>> >> LastGcInfo = {
>>> >>   GcThreadCount = 11;
>>> >>   duration = 136;
>>> >>   endTime = 42219272;
>>> >>   id = 11719;
>>> >>   memoryUsageAfterGc = {
>>> >>     ( CMS Perm Gen ) = {
>>> >>       key = CMS Perm Gen;
>>> >>       value = {
>>> >>         committed = 29229056;
>>> >>         init = 21757952;
>>> >>         max = 88080384;
>>> >>         used = 17648848;
>>> >>       };
>>> >>     };
>>> >>     ( Code Cache ) = {
>>> >>       key = Code Cache;
>>> >>       value = {
>>> >>         committed = 5832704;
>>> >>         init = 2555904;
>>> >>         max = 50331648;
>>> >>         used = 5563520;
>>> >>       };
>>> >>     };
>>> >>     ( CMS Old Gen ) = {
>>> >>       key = CMS Old Gen;
>>> >>       value = {
>>> >>         committed = 3594133504;
>>> >>         init = 112459776;
>>> >>         max = 4120510464;
>>> >>         used = 964565720;
>>> >>       };
>>> >>     };
>>> >>     ( Par Eden Space ) = {
>>> >>       key = Par Eden Space;
>>> >>       value = {
>>> >>         committed = 171835392;
>>> >>         init = 21495808;
>>> >>         max = 171835392;
>>> >>         used = 0;
>>> >>       };
>>> >>     };
>>> >>     ( Par Survivor Space ) = {
>>> >>       key = Par Survivor Space;
>>> >>       value = {
>>> >>         committed = 1310720;
>>> >>         init = 131072;
>>> >>         max = 1310720;
>>> >>         used = 0;
>>> >>       };
>>> >>     };
>>> >>   };
>>> >>   memoryUsageBeforeGc = {
>>> >>     ( CMS Perm Gen ) = {
>>> >>       key = CMS Perm Gen;
>>> >>       value = {
>>> >>         committed = 29229056;
>>> >>         init = 21757952;
>>> >>         max = 88080384;
>>> >>         used = 17648848;
>>> >>       };
>>> >>     };
>>> >>     ( Code Cache ) = {
>>> >>       key = Code Cache;
>>> >>       value = {
>>> >>         committed = 5832704;
>>> >>         init = 2555904;
>>> >>         max = 50331648;
>>> >>         used = 5563520;
>>> >>       };
>>> >>     };
>>> >>     ( CMS Old Gen ) = {
>>> >>       key = CMS Old Gen;
>>> >>       value = {
>>> >>         committed = 3594133504;
>>> >>         init = 112459776;
>>> >>         max = 4120510464;
>>> >>         used = 959221872;
>>> >>       };
>>> >>     };
>>> >>     ( Par Eden Space ) = {
>>> >>       key = Par Eden Space;
>>> >>       value = {
>>> >>         committed = 171835392;
>>> >>         init = 21495808;
>>> >>         max = 171835392;
>>> >>         used = 171835392;
>>> >>       };
>>> >>     };
>>> >>     ( Par Survivor Space ) = {
>>> >>       key = Par Survivor Space;
>>> >>       value = {
>>> >>         committed = 1310720;
>>> >>         init = 131072;
>>> >>         max = 1310720;
>>> >>         used = 0;
>>> >>       };
>>> >>     };
>>> >>   };
>>> >>   startTime = 42219136;
>>> >> };
>>> >> CollectionCount = 11720;
>>> >> CollectionTime = 4561730;
>>> >> Name = ParNew;
>>> >> Valid = true;
>>> >> MemoryPoolNames = [ Par Eden Space, Par Survivor Space ];
>>> >> #mbean = java.lang:type=OperatingSystem:
>>> >> MaxFileDescriptorCount = 63536;
>>> >> OpenFileDescriptorCount = 75;
>>> >> CommittedVirtualMemorySize = 17787711488;
>>> >> FreePhysicalMemorySize = 45522944;
>>> >> FreeSwapSpaceSize = 2123968512;
>>> >> ProcessCpuTime = 12251460000000;
>>> >> TotalPhysicalMemorySize = 8364417024;
>>> >> TotalSwapSpaceSize = 4294959104;
>>> >> Name = Linux;
>>> >> AvailableProcessors = 8;
>>> >> Arch = amd64;
>>> >> SystemLoadAverage = 4.36;
>>> >> Version = 2.6.18-164.15.1.el5;
>>> >> #mbean = java.lang:type=Runtime:
>>> >> Name = 20281@ob1061.nydc1.outbrain.com;
>>> >> ClassPath =
>>> >> /outbrain/cassandra/apache-cassandra-0.6.1/bin/../conf:
>>> >> /outbrain/cassandra/apache-cassandra-0.6.1/bin/../build/classes:
>>> >> /outbrain/cassandra/apache-cassandra-0.6.1/bin/../lib/antlr-3.1.3.jar:
>>> >> /outbrain/cassandra/apache-cassandra-0.6.1/bin/../lib/apache-cassandra-0.6.1.jar:
>>> >> /outbrain/cassandra/apache-cassandra-0.6.1/bin/../lib/avro-1.2.0-dev.jar:
>>> >> /outbrain/cassandra/apache-cassandra-0.6.1/bin/../lib/clhm-production.jar:
>>> >> /outbrain/cassandra/apache-cassandra-0.6.1/bin/../lib/commons-cli-1.1.jar:
>>> >> /outbrain/cassandra/apache-cassandra-0.6.1/bin/../lib/commons-codec-1.2.jar:
>>> >> /outbrain/cassandra/apache-cassandra-0.6.1/bin/../lib/commons-collections-3.2.1.jar:
>>> >> /outbrain/cassandra/apache-cassandra-0.6.1/bin/../lib/commons-lang-2.4.jar:
>>> >> /outbrain/cassandra/apache-cassandra-0.6.1/bin/../lib/google-collections-1.0.jar:
>>> >> /outbrain/cassandra/apache-cassandra-0.6.1/bin/../lib/hadoop-core-0.20.1.jar:
>>> >> /outbrain/cassandra/apache-cassandra-0.6.1/bin/../lib/high-scale-lib.jar:
>>> >> /outbrain/cassandra/apache-cassandra-0.6.1/bin/../lib/ivy-2.1.0.jar:
>>> >> /outbrain/cassandra/apache-cassandra-0.6.1/bin/../lib/jackson-core-asl-1.4.0.jar:
>>> >> /outbrain/cassandra/apache-cassandra-0.6.1/bin/../lib/jackson-mapper-asl-1.4.0.jar:
>>> >> /outbrain/cassandra/apache-cassandra-0.6.1/bin/../lib/jline-0.9.94.jar:
>>> >> /outbrain/cassandra/apache-cassandra-0.6.1/bin/../lib/json-simple-1.1.jar:
>>> >> /outbrain/cassandra/apache-cassandra-0.6.1/bin/../lib/libthrift-r917130.jar:
>>> >> /outbrain/cassandra/apache-cassandra-0.6.1/bin/../lib/log4j-1.2.14.jar:
>>> >> /outbrain/cassandra/apache-cassandra-0.6.1/bin/../lib/slf4j-api-1.5.8.jar:
>>> >> /outbrain/cassandra/apache-cassandra-0.6.1/bin/../lib/slf4j-log4j12-1.5.8.jar;
>>> >> BootClassPath =
>>> >> /usr/java/jdk1.6.0_17/jre/lib/alt-rt.jar:
>>> >> /usr/java/jdk1.6.0_17/jre/lib/resources.jar:
>>> >> /usr/java/jdk1.6.0_17/jre/lib/rt.jar:
>>> >> /usr/java/jdk1.6.0_17/jre/lib/sunrsasign.jar:
>>> >> /usr/java/jdk1.6.0_17/jre/lib/jsse.jar:
>>> >> /usr/java/jdk1.6.0_17/jre/lib/jce.jar:
>>> >> /usr/java/jdk1.6.0_17/jre/lib/charsets.jar:
>>> >> /usr/java/jdk1.6.0_17/jre/classes;
>>> >> LibraryPath =
>>> >> /usr/java/jdk1.6.0_17/jre/lib/amd64/server:
>>> >> /usr/java/jdk1.6.0_17/jre/lib/amd64:
>>> >> /usr/java/jdk1.6.0_17/jre/../lib/amd64:
>>> >> /usr/java/packages/lib/amd64:/lib:/usr/lib;
>>> >> VmName = Java HotSpot(TM) 64-Bit Server VM;
>>> >> VmVendor = Sun Microsystems Inc.;
>>> >> VmVersion = 14.3-b01;
>>> >> BootClassPathSupported = true;
>>> >> InputArguments = [ -ea, -Xms128M, -Xmx4G, -XX:TargetSurvivorRatio=90,
>>> >> -XX:+AggressiveOpts, -XX:+UseParNewGC, -XX:+UseConcMarkSweepGC,
>>> >> -XX:+CMSParallelRemarkEnabled, -XX:+HeapDumpOnOutOfMemoryError,
>>> >> -XX:SurvivorRatio=128, -XX:MaxTenuringThreshold=0,
>>> >> -Dcom.sun.management.jmxremote.port=9004,
>>> >> -Dcom.sun.management.jmxremote.ssl=false,
>>> >> -Dcom.sun.management.jmxremote.authenticate=false,
>>> >> -Dstorage-config=/outbrain/cassandra/apache-cassandra-0.6.1/bin/../conf,
>>> >> -Dcassandra-pidfile=/var/run/cassandra.pid ];
>>> >> ManagementSpecVersion = 1.2;
>>> >> SpecName = Java Virtual Machine Specification;
>>> >> SpecVendor = Sun Microsystems Inc.;
>>> >> SpecVersion = 1.0;
>>> >> StartTime = 1272911001415;
>>> >> ...
>>> >
>>
>>
>
>
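[Editor's note: the Memory and GarbageCollector figures in the dump above come from the standard `java.lang` platform MBeans, so the same attributes can also be read in-process without a remote JMX connection. A minimal sketch (the class name `HeapCheck` is made up for illustration); note that CollectionTime / CollectionCount gives the average young-GC pause, roughly 4561730 / 11720, about 389 ms, for the ParNew numbers quoted above:]

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryUsage;

public class HeapCheck {
    public static void main(String[] args) {
        // Same attributes as the java.lang:type=Memory mbean in the dump:
        // committed / init / max / used of the heap.
        MemoryUsage heap =
            ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
        System.out.println("heap committed = " + heap.getCommitted());
        System.out.println("heap max       = " + heap.getMax());
        System.out.println("heap used      = " + heap.getUsed());

        // Same counters as java.lang:name=ParNew,type=GarbageCollector.
        // Dividing total time by count approximates the average pause.
        for (GarbageCollectorMXBean gc :
                ManagementFactory.getGarbageCollectorMXBeans()) {
            long count = gc.getCollectionCount();
            long timeMs = gc.getCollectionTime();
            System.out.println(gc.getName() + ": count=" + count
                + " time=" + timeMs + "ms"
                + (count > 0 ? " avg=" + (timeMs / count) + "ms" : ""));
        }
    }
}
```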



--
Jonathan Ellis
Project Chair, Apache Cassandra
co-founder of Riptano, the source for professional Cassandra support
http://riptano.com
