From: "Jae-Hyuck Kwak" <jhkwak@kisti.re.kr>
To: user@hadoop.apache.org
Subject: hadoop on diskless cluster
Date: Thu, 5 Jun 2014 10:55:13 +0900
Dear all,
 
I'm trying to deploy Hadoop on a diskless cluster.
I set mapred.local.dir to a shared location on an NFS mount.
 
When I run TestDFSIO, I get the following errors on the console:
$ hadoop jar hadoop-test-1.2.2-SNAPSHOT.jar TestDFSIO -write -nrFiles 2 -fileSize 10
...
java.lang.RuntimeException: java.io.FileNotFoundException: /home/hadoopadmin/mapred_scratch/taskTracker/hadoopadmin/jobcache/job_201406050112_0002/job.xml (No such file or directory)
        at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:1243)
        at org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:1117)
        at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:1053)
        at org.apache.hadoop.conf.Configuration.get(Configuration.java:397)
        at org.apache.hadoop.mapred.JobConf.checkAndWarnDeprecation(JobConf.java:1899)
        at org.apache.hadoop.mapred.JobConf.<init>(JobConf.java:399)
        at org.apache.hadoop.mapred.JobConf.<init>(JobConf.java:389)
        at org.apache.hadoop.mapred.Child.main(Child.java:196)
Caused by: java.io.FileNotFoundException: /home/hadoopadmin/mapred_scratch/taskTracker/hadoopadmin/jobcache/job_201406050112_0002/job.xml (No such file or directory)
        at java.io.FileInputStream.open(Native Method)
        at java.io.FileInputStream.<init>(FileInputStream.java:120)
        at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:1167)
        ... 7 more
...
 
It works fine when I use a separate local directory for mapred.local.dir.
But I want it on a shared directory because the cluster is diskless.
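
My guess at what is going wrong (please correct me if this is off): each TaskTracker localizes job files under ${mapred.local.dir}/taskTracker/<user>/jobcache/<job-id>/, which matches the path in the stack trace, so with a shared mapred.local.dir every node writes to and cleans up the very same tree. A toy illustration of the race I suspect (node names and ordering are made up):

$ SHARED=/home/hadoopadmin/mapred_scratch
$ JOBDIR=$SHARED/taskTracker/hadoopadmin/jobcache/job_201406050112_0002
$ mkdir -p "$JOBDIR" && touch "$JOBDIR/job.xml"   # node A localizes the job
$ rm -rf "$JOBDIR"                                # node B finishes and cleans up the "same" job dir
# a task still running on node A now fails to reopen job.xml:
# java.io.FileNotFoundException: .../job.xml (No such file or directory)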
 
Does anyone have an idea how to do that?
Any suggestions would be appreciated.
 
FYI, the following is my configuration.
The /home directory is shared to all nodes by NFS mount.
hdfs-site.xml is not needed (I use NFS, not HDFS).
core-site.xml:
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>file:///home/hadoopadmin</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/${user.name}/tmp_data</value>
  </property>
</configuration>
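
For reference, this is how I sanity-check that the default filesystem resolves to the shared NFS path (just a check I run, nothing job-related):

$ hadoop fs -ls file:///home/hadoopadmin
# lists the shared home directory; with fs.default.name set as above,
# "hadoop fs -ls /home/hadoopadmin" shows the same entries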
mapred-site.xml:
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>192.168.0.101:9001</value>
  </property>
  <property>
    <name>mapred.child.java.opts</name>
    <value>-Xmx1024m</value>
  </property>
  <property>
    <name>mapred.local.dir</name>
    <value>/home/${user.name}/mapred_scratch</value>
    <final>true</final>
  </property>
</configuration>
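
One workaround I have been considering (untested sketch; it assumes each diskless node gets its own node-private copy of HADOOP_CONF_DIR at boot, e.g. in a tmpfs overlay, so the edit does not leak to the other nodes over NFS) is to key a per-node scratch subtree off the hostname:

# at node boot, before the TaskTracker starts
HOST=$(hostname -s)
LOCAL_DIR="/home/hadoopadmin/mapred_scratch/${HOST}"   # per-node subtree on the shared mount
mkdir -p "${LOCAL_DIR}"
# point this node's private mapred-site.xml at its own subtree
sed -i "s|<value>/home/\${user.name}/mapred_scratch</value>|<value>${LOCAL_DIR}</value>|" \
    "${HADOOP_CONF_DIR}/mapred-site.xml"

I don't know whether this is a sane approach, hence the question.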
  hdfs-site.xml:
<configuration>
</configuration>
masters:
192.168.0.101

slaves:
192.168.0.102
192.168.0.103
192.168.0.104
192.168.0.105

Cheers,
Jae-Hyuck