Subject: Exception problem while running map-reduce jobs: Wrong FileSystem Exception
From: snehal nagmote <nagmote.snehal@gmail.com>
To: core-user@hadoop.apache.org
Date: Sun, 12 Apr 2009 18:12:01 +0530

Hi,

I am trying to create HAR files in Hadoop 0.19.0, but the MapReduce jobs on my cluster are not running. They fail with the well-known exception shown in the job output below.
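For context, the archive command I am running is roughly of this form (the archive name and paths here are placeholders, not my real ones):

    bin/hadoop archive -archiveName files.har /user/root/input /user/root/archives

This launches the MapReduce job whose output follows.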
09/04/12 09:54:07 INFO mapred.FileInputFormat: Total input paths to process : 1
09/04/12 09:54:08 INFO mapred.JobClient: Running job: job_200904051339_0016
09/04/12 09:54:09 INFO mapred.JobClient: map 0% reduce 0%
09/04/12 09:54:18 INFO mapred.JobClient: Task Id : attempt_200904051339_0016_m_000003_0, Status : FAILED
Error initializing attempt_200904051339_0016_m_000003_0:
java.lang.IllegalArgumentException: Wrong FS: hdfs://172.16.6.102:21011/tmp/hadoop-root/mapred/system/job_200904051339_0016/job.xml, expected: hdfs://namenodemc:21011
        at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:322)
        at org.apache.hadoop.hdfs.DistributedFileSystem.checkPath(DistributedFileSystem.java:91)
        at org.apache.hadoop.hdfs.DistributedFileSystem.getPathName(DistributedFileSystem.java:129)
        at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:390)
        at org.apache.hadoop.mapred.TaskTracker.localizeJob(TaskTracker.java:699)
        at org.apache.hadoop.mapred.TaskTracker.startNewTask(TaskTracker.java:1636)
        at org.apache.hadoop.mapred.TaskTracker.access$1200(TaskTracker.java:102)
        at org.apache.hadoop.mapred.TaskTracker$TaskLauncher.run(TaskTracker.java:1602)
09/04/12 09:54:18 WARN mapred.JobClient: Error reading task output http://cachenode1:50060/tasklog?plaintext=true&taskid=attempt_200904051339_0016_m_000003_0&filter=stdout
09/04/12 09:54:18 WARN mapred.JobClient: Error reading task output http://cachenode1:50060/tasklog?plaintext=true&taskid=attempt_200904051339_0016_m_000003_0&filter=stderr
09/04/12 09:54:23 INFO mapred.JobClient: Task Id : attempt_200904051339_0016_m_000003_1, Status : FAILED
Error initializing attempt_200904051339_0016_m_000003_1:
java.lang.IllegalArgumentException: Wrong FS: hdfs://172.16.6.102:21011/tmp/hadoop-root/mapred/system/job_200904051339_0016/job.xml, expected: hdfs://namenodemc:21011

I searched the forum and applied the patch from https://issues.apache.org/jira/browse/HADOOP-5191, even though I am not using AIX or Solaris, but it still does not work.

This problem arises when IP addresses are used in hadoop-site.xml. For example, the NameNode machine has the proper entries in /etc/hosts:

127.0.0.1       localhost.localdomain localhost
::1             localhost6.localdomain6 localhost6
172.16.19.125   cachenode1

(cachenode1 is my DataNode.)

/etc/sysconfig/network:

NETWORKING=yes
HOSTNAME=172.16.6.102

A similar configuration is on the DataNode.

Can you please help? Thanks in advance.

Regards,
Snehal Nagmote
IIIT Hyderabad
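PS: For reference, my current understanding of the fix (a sketch only, not something confirmed yet) is to use the hostname consistently instead of the IP. Assuming 172.16.6.102 is namenodemc, and taking port 21011 from the error above (the 9001 JobTracker port is just an illustrative value of mine), hadoop-site.xml would look roughly like this:

<configuration>
  <!-- NameNode address: use the hostname, not the raw IP -->
  <property>
    <name>fs.default.name</name>
    <value>hdfs://namenodemc:21011</value>
  </property>
  <!-- JobTracker address: also the hostname; 9001 is only an example port -->
  <property>
    <name>mapred.job.tracker</name>
    <value>namenodemc:9001</value>
  </property>
</configuration>

with a matching "172.16.6.102   namenodemc" line in /etc/hosts on every node, and HOSTNAME set to the hostname rather than the IP in /etc/sysconfig/network. Does that sound like the right direction?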