incubator-cassandra-dev mailing list archives

From Zhu Han <schumi....@gmail.com>
Subject Re: Very high memory utilization (not caused by mmap on sstables)
Date Thu, 16 Dec 2010 07:10:55 GMT
The test node is behind a firewall, so it took me some time to find a way to get
JMX diagnostic information from it.

What's interesting is that both the HeapMemoryUsage and NonHeapMemoryUsage
reported by the JVM are quite reasonable.  So it's a mystery why the JVM process
maps such a big anonymous memory region...

$ java -Xmx128m -jar /tmp/cmdline-jmxclient-0.10.3.jar - localhost:8080
java.lang:type=Memory HeapMemoryUsage
12/16/2010 15:07:45 +0800 org.archive.jmx.Client HeapMemoryUsage:
committed: 1065025536
init: 1073741824
max: 1065025536
used: 18295328

$ java -Xmx128m -jar /tmp/cmdline-jmxclient-0.10.3.jar - localhost:8080
java.lang:type=Memory NonHeapMemoryUsage
12/16/2010 15:01:51 +0800 org.archive.jmx.Client NonHeapMemoryUsage:
committed: 34308096
init: 24313856
max: 226492416
used: 21475376

If anybody is interested in it, I can provide more diagnostic information
before I restart the instance.

best regards,
hanzhu
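
[Editor's note: for anyone reproducing the check above without the
cmdline-jmxclient jar, the same java.lang:type=Memory attributes can be read
in-process via MemoryMXBean. A minimal sketch, not part of the original
thread; class name is arbitrary:]

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryUsage;

// Reads the same HeapMemoryUsage / NonHeapMemoryUsage attributes the
// jmxclient queries over JMX, but from inside the running JVM.
public class MemoryCheck {
    public static void main(String[] args) {
        MemoryUsage heap = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
        MemoryUsage nonHeap = ManagementFactory.getMemoryMXBean().getNonHeapMemoryUsage();
        System.out.println("heap     committed=" + heap.getCommitted()
                + " used=" + heap.getUsed());
        System.out.println("non-heap committed=" + nonHeap.getCommitted()
                + " used=" + nonHeap.getUsed());
    }
}
```

Note that, as in the figures above, neither attribute accounts for native
allocations made outside the managed heaps, which is why they can look
healthy while the process RSS balloons.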


On Thu, Dec 16, 2010 at 1:00 PM, Zhu Han <schumi.han@gmail.com> wrote:

> After investigating it more deeply,  I suspect it's a native memory leak in the
> JVM. The large anonymous map in the lower address space should be the native
> heap of the JVM,  not the Java object heap.  Has anybody met this before?
>
> I'll try to upgrade the JVM tonight.
>
> best regards,
> hanzhu
>
>
>
> On Thu, Dec 16, 2010 at 10:50 AM, Zhu Han <schumi.han@gmail.com> wrote:
>
>> Hi,
>>
>> I have a test node with apache-cassandra-0.6.8 on Ubuntu 10.04.  The
>> hardware environment is an OpenVZ container. The JVM version is:
>> # java -Xmx128m -version
>> java version "1.6.0_18"
>> OpenJDK Runtime Environment (IcedTea6 1.8.2) (6b18-1.8.2-4ubuntu2)
>> OpenJDK 64-Bit Server VM (build 16.0-b13, mixed mode)
>>
>> This is the memory settings:
>>
>> "/usr/bin/java -ea -Xms1G -Xmx1G ..."
>>
>> And the ondisk footprint of sstables is very small:
>>
>> "# du -sh data/
>>  9.8M    data/"
>>
>> The node was accessed only infrequently over the last three weeks.  After
>> that, I observed the abnormal memory utilization in top:
>>
>>   PID USER      PR  NI  VIRT  RES   SHR S %CPU %MEM    TIME+  COMMAND
>>  7836 root      15   0 3300m  2.4g  13m S    0 26.0  2:58.51  java
>>
>> The JVM heap utilization is quite normal:
>>
>> # sudo jstat -gc -J"-Xmx128m" 7836
>>  S0C    S1C    S0U   S1U    EC       EU      OC        OU        PC       PU       YGC  YGCT   FGC  FGCT   GCT
>> 8512.0 8512.0 372.8  0.0  68160.0  5225.7  963392.0  508200.7  30604.0  18373.4  480  3.979  2    0.005  3.984
>>
>> I then tried "pmap" to look at the native memory mappings. There are two
>> large anonymous mmap regions:
>>
>> 00000000080dc000 1573568K rw---    [ anon ]
>> 00002b2afc900000  1079180K rw---    [ anon ]
>>
>> The second one should be the JVM heap.  What is the first one?  An mmap of an
>> sstable should never be an anonymous mmap, but a file-backed mmap.  Is it a
>> native memory leak?  Does Cassandra allocate any DirectByteBuffers?
>>
>> best regards,
>> hanzhu
>>
>
>
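
[Editor's note: on the DirectByteBuffer question above: direct buffers are
allocated in native memory outside the Java object heap, so a large
allocation grows the anonymous regions pmap reports while jstat's heap
figures stay flat. A minimal sketch, not part of the original thread; the
64 MB size is arbitrary:]

```java
import java.nio.ByteBuffer;

// A direct buffer's backing store is native (off-heap) memory, so a big
// allocateDirect() barely moves the Java heap's "used" figure while the
// process's anonymous mappings grow by roughly the full capacity.
public class DirectAlloc {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long heapBefore = rt.totalMemory() - rt.freeMemory();
        ByteBuffer buf = ByteBuffer.allocateDirect(64 * 1024 * 1024); // 64 MB, off-heap
        long heapDelta = (rt.totalMemory() - rt.freeMemory()) - heapBefore;
        System.out.println("direct capacity = " + buf.capacity());
        System.out.println("heap delta      = " + heapDelta); // far below 64 MB
    }
}
```

This is why HeapMemoryUsage and NonHeapMemoryUsage can both look reasonable
while the process maps a multi-gigabyte anonymous region: MemoryMXBean does
not account for direct-buffer or other malloc'd native memory.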
