hadoop-mapreduce-user mailing list archives

From "Jae-Hyuck Kwak" <jhk...@kisti.re.kr>
Subject hadoop on diskless cluster
Date Thu, 05 Jun 2014 01:55:13 GMT
Dear all,

I’m trying to deploy Hadoop on a diskless cluster.
I set mapred.local.dir to a shared location on an NFS mount.

When I execute TestDFSIO, I get the following errors in the console:
$ hadoop jar hadoop-test-1.2.2-SNAPSHOT.jar TestDFSIO -write -nrFiles 2 -fileSize 10
...
java.lang.RuntimeException: java.io.FileNotFoundException: /home/hadoopadmin/mapred_scratch/taskTracker/hadoopadmin/jobcache/job_201406050112_0002/job.xml
(No such file or directory)
        at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:1243)
        at org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:1117)
        at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:1053)
        at org.apache.hadoop.conf.Configuration.get(Configuration.java:397)
        at org.apache.hadoop.mapred.JobConf.checkAndWarnDeprecation(JobConf.java:1899)
        at org.apache.hadoop.mapred.JobConf.<init>(JobConf.java:399)
        at org.apache.hadoop.mapred.JobConf.<init>(JobConf.java:389)
        at org.apache.hadoop.mapred.Child.main(Child.java:196)
Caused by: java.io.FileNotFoundException: /home/hadoopadmin/mapred_scratch/taskTracker/hadoopadmin/jobcache/job_201406050112_0002/job.xml
(No such file or directory)
        at java.io.FileInputStream.open(Native Method)
        at java.io.FileInputStream.<init>(FileInputStream.java:120)
        at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:1167)
        ... 7 more
...

It works fine when I use a separate (node-local) directory for mapred.local.dir.
However, I want to keep it on a shared directory because the cluster is diskless.
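
For comparison, this is roughly what the working setup looks like when mapred.local.dir points to a node-local path (the path /tmp/mapred_scratch here is only an example; on a diskless node it could sit on a RAM-backed tmpfs mount):

```xml
<!-- mapred-site.xml fragment: node-local scratch directory.
     /tmp/mapred_scratch is an example path, not my actual config;
     on a diskless node it could live on a tmpfs mount instead of a disk. -->
<property>
  <name>mapred.local.dir</name>
  <value>/tmp/mapred_scratch</value>
  <final>true</final>
</property>
```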

Does anyone have ideas on how to do that?
Any suggestions would be appreciated.

FYI, my configuration is below.
The /home directory is shared with all nodes via an NFS mount.
hdfs-site.xml is not needed (I use NFS, not HDFS).

core-site.xml:
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>file:///home/hadoopadmin</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/${user.name}/tmp_data</value>
  </property>
</configuration>

mapred-site.xml:
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>192.168.0.101:9001</value>
  </property>
  <property>
    <name>mapred.child.java.opts</name>
    <value>-Xmx1024m</value>
  </property>
  <property>
    <name>mapred.local.dir</name>
    <value>/home/${user.name}/mapred_scratch</value>
    <final>true</final>
  </property>
</configuration>

hdfs-site.xml:
<configuration>
</configuration>

masters:
192.168.0.101

slaves:
192.168.0.102
192.168.0.103
192.168.0.104
192.168.0.105

Cheers,
Jae-Hyuck
