Message-ID: <1622178287.1204518770462.JavaMail.jira@brutus>
In-Reply-To: <12707779.1201858868032.JavaMail.jira@brutus>
Date: Sun, 2 Mar 2008 20:32:50 -0800 (PST)
From: "Amareshwari Sri Ramadasu (JIRA)"
To: core-dev@hadoop.apache.org
Reply-To: core-dev@hadoop.apache.org
Subject: [jira] Updated: (HADOOP-2765) setting memory limits for tasks

     [ https://issues.apache.org/jira/browse/HADOOP-2765?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Amareshwari Sri Ramadasu updated HADOOP-2765:
---------------------------------------------

    Status: Open  (was: Patch Available)

> setting memory limits for tasks
> -------------------------------
>
>                 Key: HADOOP-2765
>                 URL: https://issues.apache.org/jira/browse/HADOOP-2765
>             Project: Hadoop Core
>          Issue Type: New Feature
>          Components: contrib/streaming
>    Affects Versions: 0.15.3
>            Reporter: Joydeep Sen Sarma
>            Assignee: Amareshwari Sri Ramadasu
>             Fix For: 0.16.1
>
>         Attachments: patch-2765.txt, patch-2765.txt, patch-2765.txt, patch-2765.txt, patch-2765.txt, patch-2765.txt
>
>
> Here's the motivation: we want to put a memory limit on user scripts to prevent runaway scripts from bringing down nodes. This limit is set much lower than the maximum memory a task may legitimately use, since runaway scripts are most likely the result of scripting bugs. At the same time, careful users should be able to override the limit and use more memory.
>
> There's no good way to do this today. We can set a ulimit in the Hadoop shell scripts, but that is very restrictive: there doesn't seem to be a way to call setrlimit from Java, and setting a ulimit means that supplying a higher -Xmx value from the jobconf is useless (the child JVM is bound by the ulimit that was in effect when the TaskTracker was launched).
>
> What we have ended up doing (and I think this might help others as well) is to add a stream.wrapper option. Its value is a program through which the streaming mapper and reducer scripts are exec'd. In our case, the wrapper is a small C program that does a setrlimit and then execs the streaming job.
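>
> Roughly, the wrapper looks like the following. This is a minimal sketch, not our exact wrapper: the 1 GB default, the optional byte-count argument, and the choice of RLIMIT_AS are illustrative.
>
>     /* stream.wrapper sketch: set an address-space limit on this
>      * process, then exec the real streaming mapper/reducer so the
>      * limit is inherited by the user's script. */
>     #include <stdio.h>
>     #include <stdlib.h>
>     #include <unistd.h>
>     #include <sys/resource.h>
>
>     int main(int argc, char **argv)
>     {
>         rlim_t limit = 1024L * 1024L * 1024L;  /* default: 1 GB */
>         int cmd = 1;                /* argv index of the command to exec */
>         struct rlimit rl;
>
>         /* "wrapper <bytes> <command> [args...]" overrides the default */
>         if (argc > 2 && atol(argv[1]) > 0) {
>             limit = (rlim_t) atol(argv[1]);
>             cmd = 2;
>         }
>         if (argc <= cmd) {
>             fprintf(stderr, "usage: %s [bytes] command [args...]\n", argv[0]);
>             return 1;
>         }
>
>         rl.rlim_cur = rl.rlim_max = limit;
>         if (setrlimit(RLIMIT_AS, &rl) != 0) {
>             perror("setrlimit");
>             return 1;
>         }
>
>         execvp(argv[cmd], argv + cmd);  /* replace ourselves with the job */
>         perror("execvp");               /* reached only if exec fails */
>         return 1;
>     }
>
> A job points stream.wrapper at this binary, and a careful user raises the limit by passing a bigger byte count, something like -jobconf stream.wrapper='/path/to/wrapper 2147483648' (the path is a placeholder, and the spelling assumes streaming's -jobconf option).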
> The default wrapper puts a reasonable limit on memory usage, but users can easily override it (e.g. by invoking the wrapper with a different memory-limit argument). In the future we could use the wrapper for other system-wide resource limits (or any environment settings) as well.
>
> This way, JVMs can stick to mapred.child.java.opts as the way to control memory usage. This setup has saved us on many occasions while still allowing sophisticated users to run with high memory limits.
>
> I can submit a patch if this sounds interesting.

--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.