Message-ID: <4D51A90C.40701@iponweb.net>
Date: Tue, 08 Feb 2011 23:35:24 +0300
From: Maxim Zizin
Organization: IPonWeb
To: mapreduce-dev@hadoop.apache.org
Subject: Re: JobTracker memory usage peaks once a day and OOM sometimes
References: <4D517676.8080003@gmail.com>

Allen,

Thanks for your answer.

Re: handful of jobs -- That was our first thought, but we looked at the logs and found nothing strange. Moreover, after the JT's restart the time at which the peaks start shifted, and when we restarted it once more it shifted again. In every case the first peak after a restart begins roughly 24 hours after the restart.
So this looks like some scheduled daily activity, not something that depends on the jobs we run.

Re: heap size -- We have a cluster of 12 slaves. 2GB seems to be enough, as the JT uses ~1GB normally and ~1.5GB during the peaks. We're nevertheless going to increase the JT's heap to 3GB tomorrow; at the very least that buys us more time to pause crons and restart the JT before it runs out of heap space next time. Or am I wrong to think that the fact that our JT uses 1-1.5GB means 2GB of heap is enough?

On 2/8/2011 11:16 PM, Allen Wittenauer wrote:
> On Feb 8, 2011, at 8:59 AM, Maxim Zizin wrote:
>
>> Hi all,
>>
>> We monitor JT, NN and SNN memory usage and observe the following behavior in our Hadoop cluster. The JT's heap size is set to 2000m. For about 18 hours a day it uses ~1GB, but every day, roughly at the minute it was started, its used memory climbs to ~1.5GB and then falls back to ~1GB over about 6 hours -- sometimes a bit more, sometimes a bit less. I was wondering whether anyone here knows what the JT does once a day that makes it use 1.5 times more memory than normal.
>>
>> We're so interested in JT memory usage because during the last two weeks the JT twice ran out of heap space. Both times, right after those daily peaks, while used memory was falling from 1.5GB back toward 1GB, it started climbing again until it got stuck at ~2.2GB. At that point the JT becomes unresponsive and we have to restart it.
>>
>> We're using Cloudera's CDH2, version 0.20.1+169.113.
> Who knows what is happening in the CDH release?
>
> But in the normal JobTracker, keep in mind that memory is consumed by every individual task listed on the main page. If you have some jobs with extremely high task counts, a lot of counters, really long names, or the like, then that is likely your problem. Chances are good you have a handful of jobs that are bad citizens and that get scrolled off the page at the same time every day.
>
> Also, for any grid of any significant size, 2g of heap is way too small.

-- 
Regards, Max
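P.S. One guess we're going to look into -- not confirmed, and based on stock Hadoop 0.20 behavior that CDH2 may have changed: the JobTracker keeps recently completed jobs in memory and only retires them after mapred.jobtracker.retirejob.interval, which defaults to 24 hours. That would line up with the peak starting ~24 hours after each restart. If that turns out to be the cause, lowering the retention settings in mapred-site.xml might shrink the peak; the values below are illustrative guesses, not tested ones:

```xml
<!-- Hypothetical tuning sketch for mapred-site.xml. Property names are from
     stock Hadoop 0.20; the values are untested guesses for our cluster. -->
<property>
  <!-- Retire completed jobs from JT memory after 6h instead of 24h (ms). -->
  <name>mapred.jobtracker.retirejob.interval</name>
  <value>21600000</value>
</property>
<property>
  <!-- Keep at most 25 completed jobs per user in memory (default is 100). -->
  <name>mapred.jobtracker.completeuserjobs.maximum</name>
  <value>25</value>
</property>
```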