From: peter 2
Date: Fri, 17 Oct 2014 21:24:44 +0300
To: user@hadoop.apache.org
Subject: Dynamically set map / reducer memory

Hi guys,
I am trying to run a few MR jobs in succession; some of the jobs don't need much memory and others do. I want to be able to tell Hadoop how much memory should be allocated to the mappers of each job.
I know how to increase the memory for the mapper JVM globally, through the mapred-site.xml configuration.
I tried manually setting mapreduce.reduce.java.opts = -Xmx<someNumber>m, but it wasn't picked up by the mapper JVM; the global setting was always used instead.
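
For illustration, the kind of per-job override I have in mind would sit in the job driver, something like the rough sketch below (assuming Hadoop 2.x / YARN property names; the job name, memory values and paths are placeholders, not anything from an actual cluster):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class SmallJobDriver {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();

        // Per-job memory settings (Hadoop 2.x / YARN property names).
        // The YARN container needs to be somewhat larger than the JVM heap it hosts.
        conf.set("mapreduce.map.memory.mb", "512");          // container size for each map task
        conf.set("mapreduce.map.java.opts", "-Xmx250m");     // heap of the map task JVM
        conf.set("mapreduce.reduce.memory.mb", "2560");      // container size for each reduce task
        conf.set("mapreduce.reduce.java.opts", "-Xmx2048m"); // heap of the reduce task JVM

        // The Job copies the Configuration, so the properties are set before creating it.
        Job job = Job.getInstance(conf, "small-memory-job");
        job.setJarByClass(SmallJobDriver.class);
        // The real mapper/reducer classes and key/value types would be set here.
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}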

In summary:
Job 1 - mappers need only 250 MB of RAM
Job 2 - mapper and reducer each need around 2 GB

I don't want to have to set those restrictions before submitting the job to my Hadoop cluster.
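
Something like the following is the kind of per-job, submit-time control I have in mind: a driver built on ToolRunner so the memory properties can be passed as -D overrides for each job. This is only a sketch under the same Hadoop 2.x assumptions; MemoryAwareDriver, the jar name and the paths are placeholders.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

// Submit-time usage, with the memory overrides chosen per job, e.g.:
//   hadoop jar my-job.jar MemoryAwareDriver \
//       -D mapreduce.map.memory.mb=512 -D mapreduce.map.java.opts=-Xmx250m \
//       /input /output
public class MemoryAwareDriver extends Configured implements Tool {

    @Override
    public int run(String[] args) throws Exception {
        // getConf() already contains any -D key=value overrides parsed by ToolRunner.
        Job job = Job.getInstance(getConf(), "memory-aware-job");
        job.setJarByClass(MemoryAwareDriver.class);
        // The real mapper/reducer classes and key/value types would be set here.
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        return job.waitForCompletion(true) ? 0 : 1;
    }

    public static void main(String[] args) throws Exception {
        System.exit(ToolRunner.run(new Configuration(), new MemoryAwareDriver(), args));
    }
}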