hadoop-common-user mailing list archives

From Dieter De Witte <drdwi...@gmail.com>
Subject Re: mr1 and mr2
Date Sun, 11 May 2014 06:49:33 GMT
In my experience, mixing the two APIs is the most likely reason things do not work. If
you are using JobConf, then I don't think you are using the MR1 API: MR1 corresponds
to Hadoop 1.x.x, while JobConf comes from the old Hadoop 0.x API. From Hadoop 1 on you
use a Job object, and your mappers and reducers depend on a Context (not
on a Reporter and OutputCommitter); this should allow you to do a version
check.
Regards, D
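
The JobConf-vs-Job distinction can also be probed at runtime. A minimal sketch (the class name MrApiCheck is mine, not anything from Hadoop itself) that uses reflection to see which MapReduce API classes are actually on the classpath:

```java
// Sketch: crude API/version check by probing the classpath.
// The two class names are the real entry points of the old ("mapred")
// and new ("mapreduce") Hadoop APIs.
public class MrApiCheck {

    // Returns true if the named class can be loaded from the classpath.
    static boolean present(String className) {
        try {
            Class.forName(className);
            return true;
        } catch (ClassNotFoundException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // Old API, dating back to Hadoop 0.x:
        System.out.println("mapred JobConf API:    "
                + present("org.apache.hadoop.mapred.JobConf"));
        // New API, standard from Hadoop 1.x onward:
        System.out.println("mapreduce Job/Context API: "
                + present("org.apache.hadoop.mapreduce.Job"));
    }
}
```

Run it with your job's classpath (e.g. via `hadoop jar`) and it will tell you which of the two APIs your deployment actually provides.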


2014-05-11 1:52 GMT+02:00 Tony Dean <Tony.Dean@sas.com>:

>  Hi,
>
>
>
> I am trying to write a Java application that works with either MR1 or
> MR2.  At present I have the MR2 (YARN) implementation deployed and
> running.  I am using the mapred API.  I believe I read that the mapred and
> mapreduce APIs are compatible, so either should work.  The only thing that
> is different is the configuration properties that need to be specified
> depending on whether the back-end is MR1 or MR2. BTW: I’m using CDH 4.6
> (Hadoop 2.0).
>
>
>
> My problem is that I can’t seem to submit a job to the cluster.  It always
> runs locally.  I setup JobConf with appropriate properties and submit the
> jobs using JobClient.  The properties that I set on JobConf are as follows:
>
>
>
> mapreduce.jobtracker.address=host:port (I know this is for MR1, but I’m
> trying everything)
>
> mapreduce.framework.name=yarn
>
> yarn.resourcemanager.address=host:port
>
> yarn.resourcemanager.host=host:port
>
>
>
> The last two are the same setting, but I read two different ways to specify it
> in conflicting documentation.
>
>
>
> Anyway, can someone explain how to get this seemingly simple deployment to
> work?  What am I missing?
>
>
>
> Thanks!!!
>
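
On the two conflicting property names: as far as I know, stock Hadoop 2 documents `yarn.resourcemanager.hostname` (host only) and `yarn.resourcemanager.address` (host:port); I don't believe `yarn.resourcemanager.host` exists. A minimal client-side configuration fragment for submitting to YARN might look like this (a sketch based on the stock Hadoop 2 property names, not a tested CDH configuration):

```xml
<!-- Sketch: property names as documented for stock Hadoop 2;
     CDH may layer its own defaults on top of these. -->
<configuration>
  <property>
    <!-- tells the client to submit to YARN rather than run locally -->
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <!-- host:port of the ResourceManager's client RPC endpoint -->
    <name>yarn.resourcemanager.address</name>
    <value>host:port</value>
  </property>
</configuration>
```

If `mapreduce.framework.name` is left unset (or set to `local`), JobClient will run the job locally, which matches the symptom described above.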
