Date: Fri, 04 Mar 2011 11:25:07 +0530
From: Bhallamudi Venkata Siva Kamesh
Subject: FW: Exception due to improper configuration
To: mapreduce-dev@hadoop.apache.org
Reply-To: mapreduce-dev@hadoop.apache.org

Hi All,

Please respond...

-----Original Message-----
From: Bhallamudi kamesh [mailto:bhallamudi.kamesh@huawei.com]
Sent: Tuesday, February 22, 2011 4:49 PM
To: mapreduce-dev@hadoop.apache.org
Subject: Exception due to improper configuration

Hi All,

When we submit a job through the job client, the job's jar in the
mapred.system.dir directory is replicated according to the configuration
parameter mapred.submit.replication, which is defined in mapred-default.xml.
By default this value is 10. However, dfs.replication and dfs.replication.max,
which are defined in hdfs-site.xml, can be configured independently of
mapred.submit.replication. Suppose a user has configured dfs.replication.max
as 5 (say); then the following exception is thrown:

org.apache.hadoop.ipc.RemoteException: java.io.IOException: file
/home/kamesh/hadoop/hadoop-root/mapred/system/job_201102221545_0001/job.jar.
Requested replication 10 exceeds maximum 2
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.verifyReplication(FSNamesystem.java:1179)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setReplicationInternal(FSNamesystem.java:1130)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setReplication(FSNamesystem.java:1115)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.setReplication(NameNode.java:630)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:514)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:991)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:987)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:396)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:985)

As per the code this is absolutely correct, and it should be so. However,
I feel that when the requested replication exceeds the maximum replication,
we could instead cap the replication at the maximum. That would at least
ensure the application executes. This behavior has both pros and cons: the
application runs, but the jar is not replicated as per the user-given
configuration. What do you think?

Thanks & Regards,
Bh.V.S.Kamesh.
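
P.S. Below is a minimal sketch of the capping I have in mind. It assumes the
client-side Configuration can see the cluster's dfs.replication.max (i.e.
hdfs-site.xml is on the classpath; 512 is only the stock default). The class
and method names are illustrative, not existing Hadoop code.

    import java.io.IOException;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    /** Illustrative helper, not existing Hadoop code. */
    public class SubmitReplicationCap {

      /**
       * Returns mapred.submit.replication capped at dfs.replication.max,
       * so the NameNode does not reject the setReplication() request.
       */
      public static short cappedSubmitReplication(Configuration conf) {
        int requested = conf.getInt("mapred.submit.replication", 10);
        // Assumption: the client only sees the cluster's real limit if
        // hdfs-site.xml is on its classpath; 512 is the stock default.
        int maxAllowed = conf.getInt("dfs.replication.max", 512);
        return (short) Math.min(requested, maxAllowed);
      }

      /** Applies the capped replication to the submitted job jar. */
      public static void applyTo(FileSystem fs, Path jobJar, Configuration conf)
          throws IOException {
        fs.setReplication(jobJar, cappedSubmitReplication(conf));
      }
    }

The obvious downside, as noted above, is that the jar is silently replicated
less widely than the user asked for; logging a warning when the cap kicks in
would make that visible.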