hadoop-mapreduce-issues mailing list archives

From "Vinod Kumar Vavilapalli (JIRA)" <j...@apache.org>
Subject [jira] [Resolved] (MAPREDUCE-5901) Hadoop 2.4 Java execution issue: remotely submission jobs fail
Date Wed, 21 May 2014 22:41:38 GMT

     [ https://issues.apache.org/jira/browse/MAPREDUCE-5901?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Vinod Kumar Vavilapalli resolved MAPREDUCE-5901.
------------------------------------------------

      Resolution: Won't Fix
    Release Note:   (was: 2.4)

Folks have reported that multi-node clusters work.

In any case, please use the user mailing lists for debugging such issues. JIRA is for tracking
bugs.

> Hadoop 2.4 Java execution issue: remotely submission jobs fail 
> ---------------------------------------------------------------
>
>                 Key: MAPREDUCE-5901
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5901
>             Project: Hadoop Map/Reduce
>          Issue Type: New Feature
>         Environment: java, hadoop v. 2.4 
>            Reporter: michele
>
> I have installed Hadoop 2.4 on a remote machine in single-node mode. From another machine
> (client) I run a Java application that submits a job to the remote Hadoop machine (cluster),
> using the attached code. The problem is that the actual execution of the map process runs
> on my local machine (client), not on the cluster machine.
> JobConf job = new JobConf(SOF.class);
> job.setJobName("SIM-" + sim_id);
> System.setProperty("HADOOP_USER_NAME", "hadoop");
> FileInputFormat.addInputPath(job, new Path("hdfs://cluster_ip:port" + USERS_HOME + user + "/SIM-" + sim_id + "/" + INPUT_FOLDER_HOME + "/input.tmp") /*new_inputs_path*/);
> FileOutputFormat.setOutputPath(job, new Path("hdfs://cluster_ip:port" + USERS_HOME + user + "/SIM-" + sim_id + "/" + OUTPUT_FOLDER_HOME));
> job.set("jar.work.directory", "hdfs://cluster_ip:port" + SOF.USERS_HOME + user + "/SIM-" + sim_id + "/flockers.jar");
> job.setMapperClass(Mapper.class);
> job.setReducerClass(Reducer.class);
> job.setOutputKeyClass(org.apache.hadoop.io.Text.class);
> job.setOutputValueClass(org.apache.hadoop.io.Text.class);
> job.set("mapred.job.tracker", "cluster_ip:port");
> job.set("fs.default.name", "hdfs://cluster_ip:port");
> job.set("hadoop.job.ugi", "hadoop,hadoop");
> job.set("user", "hadoop");
> try {
>     JobClient jobc = new JobClient(job);
>     System.out.println(jobc + " " + job);
>     RunningJob runjob = jobc.submitJob(job);
>     System.out.println(runjob);
>     System.out.println("VM " + Inet4Address.getLocalHost());
>     // busy-wait until the submitted job finishes
>     while (!runjob.isComplete()) { }
> } catch (Exception e) {
>     e.printStackTrace();
> }
> I have tried to set up Hadoop correctly using the following mapred-site.xml:
> <configuration>
>     <property>
>         <name>mapred.job.tracker</name>
>         <value>cluster_ip:port</value>
>     </property>
>     <property>
>         <name>mapreduce.framework.name</name>
>         <value>yarn</value>
>     </property>
> </configuration>
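For readers hitting the same symptom: in Hadoop 2.x, a job submitted from a client machine falls back to the LocalJobRunner (maps run on the submitting machine) unless the client's own configuration selects the YARN framework; setting only the deprecated `mapred.job.tracker` key in code does not do this. A minimal sketch of the configuration the *client* would need, assuming `cluster_ip` stands for the real hostname and the ResourceManager listens on its default port 8032:

```xml
<configuration>
    <!-- Select YARN instead of the default local runner -->
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <!-- Where the ResourceManager accepts job submissions (8032 is the default) -->
    <property>
        <name>yarn.resourcemanager.address</name>
        <value>cluster_ip:8032</value>
    </property>
    <!-- Default filesystem; replaces the deprecated fs.default.name key -->
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://cluster_ip:port</value>
    </property>
</configuration>
```

The same keys can equivalently be set programmatically on the client's `Configuration`/`JobConf` before constructing the `JobClient`.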



--
This message was sent by Atlassian JIRA
(v6.2#6252)
