hbase-user mailing list archives

From Andrey Stepachev <oct...@gmail.com>
Subject Re: Java Committed Virtual Memory significantly larger than Heap Memory
Date Tue, 11 Jan 2011 20:55:20 GMT
I tried setting MALLOC_ARENA_MAX=2, but I still see the same issue as in the LZO
problem thread: all those 65M blocks are still there, and the JVM keeps eating memory
under heavy write load. And yes, I use the "improved" kernel:
Linux 2.6.34.7-0.5.
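
(For what it's worth, MALLOC_ARENA_MAX only helps if it actually reaches the
environment of the JVM process itself. A rough way to check on the running daemon,
assuming the pid 7863 from the dumps below, is something like:

    tr '\0' '\n' < /proc/7863/environ | grep MALLOC_ARENA_MAX

If nothing comes back, the export was not picked up by the startup scripts.)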

2011/1/11 Xavier Stevens <xstevens@mozilla.com>

> Are you using a newer linux kernel with the new and "improved" memory
> allocator?
>
> If so try setting this in hadoop-env.sh:
>
> export MALLOC_ARENA_MAX=<number of cores you want to use>
>
> Maybe start by setting it to 4.  You can thank Todd Lipcon if this works
> for you.
>
> Cheers,
>
>
> -Xavier
>
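
As a concrete sketch of that suggestion: the same line can also go into hbase-env.sh
for the HBase daemons, and the value 4 is just a starting point to tune. (Background,
as I understand it: on 64-bit hosts a newer glibc can create many per-thread malloc
arenas, each reserving a 64MB chunk of address space, which is where the large virtual
size tends to come from.)

    # hadoop-env.sh / hbase-env.sh: cap the number of glibc malloc arenas
    export MALLOC_ARENA_MAX=4
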
> On 1/11/11 7:24 AM, Andrey Stepachev wrote:
> > No. I don't use LZO. I tried even remove any native support (i.e. all .so
> > from class path)
> > and use java gzip. But nothing.
> >
> >
> > 2011/1/11 Friso van Vollenhoven <fvanvollenhoven@xebia.com>
> >
> >> Are you using LZO by any chance? If so, which version?
> >>
> >> Friso
> >>
> >>
> >> On 11 jan 2011, at 15:57, Andrey Stepachev wrote:
> >>
> >>> After starting HBase under JRockit I found the same memory leakage.
> >>>
> >>> After the launch
> >>>
> >>> Every 2,0s: date && ps --sort=-rss -eo pid,rss,vsz,pcpu | head
> >>> Tue Jan 11 16:49:31 2011
> >>>
> >>> Tue Jan 11 16:49:31 MSK 2011
> >>>   PID     RSS     VSZ %CPU
> >>>  7863 2547760 5576744 78.7
> >>>
> >>>
> >>>
> >>> JR dumps:
> >>>
> >>> Total mapped                  5576740KB (reserved=2676404KB)
> >>> -              Java heap      2048000KB (reserved=1472176KB)
> >>> -              GC tables        68512KB
> >>> -          Thread stacks        37236KB (#threads=111)
> >>> -          Compiled code      1048576KB (used=2599KB)
> >>> -               Internal         1224KB
> >>> -                     OS       549688KB
> >>> -                  Other      1800976KB
> >>> -            Classblocks         1280KB (malloced=1110KB #3285)
> >>> -        Java class data        20224KB (malloced=20002KB #15134 in 3285 classes)
> >>> - Native memory tracking         1024KB (malloced=325KB +10KB #20)
> >>>
> >>>
> >>>
> >>> After running the MR job which creates a heavy write load (~1 hour):
> >>>
> >>> Every 2,0s: date && ps --sort=-rss -eo pid,rss,vsz,pcpu | head
> >>> Tue Jan 11 17:08:56 2011
> >>>
> >>> Tue Jan 11 17:08:56 MSK 2011
> >>>   PID     RSS     VSZ %CPU
> >>>  7863 4072396 5459572 100
> >>>
> >>>
> >>>
> >>> JR dump diff (less important parts trimmed; full output at the link below):
> >>>
> >>> http://paste.ubuntu.com/552820/
> >>>
> >>>
> >>> 7863:
> >>> Total mapped                  5742628KB +165888KB (reserved=1144000KB -1532404KB)
> >>> -              Java heap      2048000KB           (reserved=0KB -1472176KB)
> >>> -              GC tables        68512KB
> >>> -          Thread stacks        38028KB    +792KB (#threads=114 +3)
> >>> -          Compiled code      1048576KB           (used=3376KB +776KB)
> >>> -               Internal         1480KB    +256KB
> >>> -                     OS       517944KB  -31744KB
> >>> -                  Other      1996792KB +195816KB
> >>> -            Classblocks         1280KB           (malloced=1156KB +45KB #3421 +136)
> >>> -        Java class data        20992KB    +768KB (malloced=20843KB +840KB #15774 +640 in 3421 classes)
> >>> - Native memory tracking         1024KB           (malloced=325KB +10KB #20)
> >>>
> >>>
> >>
> >>> +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
> >>>    OS                          *java    r x 0x0000000000400000 (     76KB)
> >>>    OS                          *java    rw  0x0000000000612000 (      4KB)
> >>>    OS                        *[heap]    rw  0x0000000000613000 ( 478712KB)
> >>>   INT                           Poll    r   0x000000007fffe000 (      4KB)
> >>>   INT                         Membar    rw  0x000000007ffff000 (      4KB)
> >>>   MSP              Classblocks (1/2)    rw  0x0000000082ec0000 (    768KB)
> >>>   MSP              Classblocks (2/2)    rw  0x0000000082f80000 (    512KB)
> >>>  HEAP                      Java heap    rw  0x0000000083000000 (2048000KB)
> >>>                                         rw  0x00007f2574000000 (  65500KB)
> >>>                                             0x00007f2577ff7000 (     36KB)
> >>>                                         rw  0x00007f2584000000 (  65492KB)
> >>>                                             0x00007f2587ff5000 (     44KB)
> >>>                                         rw  0x00007f258c000000 (  65500KB)
> >>>                                             0x00007f258fff7000 (     36KB)
> >>>                                         rw  0x00007f2590000000 (  65500KB)
> >>>                                             0x00007f2593ff7000 (     36KB)
> >>>                                         rw  0x00007f2594000000 (  65500KB)
> >>>                                             0x00007f2597ff7000 (     36KB)
> >>>                                         rw  0x00007f2598000000 ( 131036KB)
> >>>                                             0x00007f259fff7000 (     36KB)
> >>>                                         rw  0x00007f25a0000000 (  65528KB)
> >>>                                             0x00007f25a3ffe000 (      8KB)
> >>>                                         rw  0x00007f25a4000000 (  65496KB)
> >>>                                             0x00007f25a7ff6000 (     40KB)
> >>>                                         rw  0x00007f25a8000000 (  65496KB)
> >>>                                             0x00007f25abff6000 (     40KB)
> >>>                                         rw  0x00007f25ac000000 (  65504KB)
> >>>
> >>>
> >>> So, the difference was in pieces of memory like this:
> >>>
> >>> rw 0x00007f2590000000 (65500KB)
> >>>     0x00007f2593ff7000 (36KB)
> >>>
> >>> Looks like HLog allocates this memory (it looks like HLog because the size is
> >>> very similar).
> >>>
> >>> If we count these blocks we get the amount of lost memory:
> >>>
> >>> 65M * 32 + 132M = 2212M
> >>>
> >>> So it looks like HLog allocates too much memory, and the question is: how to
> >>> restrict it?
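
(A rough way to reproduce that count straight from the live process, using the pid
from the dumps above: pmap -x prints each mapping's size in KB in its second column,
so something like

    pmap -x 7863 | awk '$2 >= 60000 && $2 <= 140000 { n++; sum += $2 } END { printf "%d blocks, %d KB\n", n, sum }'

should show roughly the same number of ~65M blocks and about the same total.)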
> >>>
> >>> 2010/12/30 Andrey Stepachev <octo47@gmail.com>
> >>>
> >>>> Hi All.
> >>>>
> >>>> After a heavy load into HBase (single-node, non-distributed test system) the
> >>>> HBase java process grew to 4GB.
> >>>> On a 6GB machine that left no room for anything else (disk cache and so on).
> >>>> Does anybody know what is going on, and how you solve this? What heap size do
> >>>> you set on your hosts, and how much RSS does the HBase process actually use?
> >>>>
> >>>> I haven't seen such things before; Tomcat and other java apps don't eat
> >>>> significantly more memory than -Xmx.
> >>>>
> >>>> Connection name:    pid: 23476 org.apache.hadoop.hbase.master.HMaster start
> >>>> Virtual Machine:    Java HotSpot(TM) 64-Bit Server VM version 17.1-b03
> >>>> Vendor:             Sun Microsystems Inc.
> >>>> Name:               23476@mars
> >>>> Uptime:             12 hours 4 minutes
> >>>> Process CPU time:   5 hours 45 minutes
> >>>> JIT compiler:       HotSpot 64-Bit Server Compiler
> >>>> Total compile time: 19,223 seconds
> >>>> ------------------------------
> >>>> Current heap size:    703 903 kbytes
> >>>> Maximum heap size:    2 030 976 kbytes
> >>>> Committed memory:     2 030 976 kbytes
> >>>> Pending finalization: 0 objects
> >>>> Garbage collector: Name = 'ParNew', Collections = 9 990, Total time spent = 5 minutes
> >>>> Garbage collector: Name = 'ConcurrentMarkSweep', Collections = 20, Total time spent = 35,754 seconds
> >>>> ------------------------------
> >>>> Operating System:     Linux 2.6.34.7-0.5-xen
> >>>> Architecture:         amd64
> >>>> Number of processors: 8
> >>>> Committed virtual memory: 4 403 512 kbytes
> >>>> Total physical memory:    6 815 744 kbytes
> >>>> Free physical memory:        82 720 kbytes
> >>>> Total swap space:         8 393 924 kbytes
> >>>> Free swap space:          8 050 880 kbytes
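
(For cross-checking numbers like these against the kernel's view of the same process,
pid 23476 from the connection name above, something like

    grep -E 'Vm(Peak|Size|RSS)' /proc/23476/status

shows the peak and current virtual size and the resident set as the OS sees them.)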
> >>>>
> >>>>
> >>>>
> >>>>
> >>
>
