hadoop-mapreduce-user mailing list archives

From Pavan Sudheendra <pavan0...@gmail.com>
Subject Re: Maven Cloudera Configuration problem
Date Tue, 13 Aug 2013 17:07:43 GMT
Yes Sandy, I'm referring to the LocalJobRunner. I'm actually running
the job on one datanode.

What changes should I make so that my application takes advantage of
the cluster as a whole?
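
For reference, a minimal driver sketch along the lines being discussed
(the class, job, and path names here are illustrative assumptions, not
taken from this thread). If a driver like this is submitted with
hadoop jar from a host whose HADOOP_CONF_DIR contains the cluster's
*-site.xml client configs, the Configuration it builds picks up the
cluster settings, and the job runs on the cluster instead of falling
back to the LocalJobRunner:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

// Illustrative driver; MyDriver and the mapper/reducer wiring are placeholders.
public class MyDriver extends Configured implements Tool {

  @Override
  public int run(String[] args) throws Exception {
    // getConf() is populated from the *-site.xml files on the submitting host's
    // classpath, plus any -D overrides passed on the command line via ToolRunner.
    Configuration conf = getConf();
    Job job = Job.getInstance(conf, "my-job");
    job.setJarByClass(MyDriver.class);
    // job.setMapperClass(...); job.setReducerClass(...); key/value classes, etc.
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    return job.waitForCompletion(true) ? 0 : 1;
  }

  public static void main(String[] args) throws Exception {
    System.exit(ToolRunner.run(new Configuration(), new MyDriver(), args));
  }
}

Submitted from a cluster (or properly configured gateway) node, e.g.
"hadoop jar my-job.jar MyDriver /input /output", the same jar that
Maven builds for the single-node setup should run on the full cluster
without code changes.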

On Tue, Aug 13, 2013 at 10:33 PM,  <sandy.ryza@cloudera.com> wrote:
> Nothing in your pom.xml should affect the configurations your job runs with.
>
> Are you running your job from a node on the cluster? When you say localhost
> configurations, do you mean it's using the LocalJobRunner?
>
> -sandy
>
> (iphnoe tpying)
>
> On Aug 13, 2013, at 9:07 AM, Pavan Sudheendra <pavan0591@gmail.com> wrote:
>
>> When I actually run the job on the multi-node cluster, the logs show
>> it uses localhost configurations, which I don't want.
>>
>> I just have a pom.xml which lists all the dependencies: standard
>> Hadoop, standard HBase, standard ZooKeeper, etc. Should I remove
>> these dependencies?
>>
>> I want the cluster settings to apply to my map-reduce application.
>> This is where I'm stuck.
>>
>> On Tue, Aug 13, 2013 at 9:30 PM, Pavan Sudheendra <pavan0591@gmail.com> wrote:
>>> Hi Shabab and Sandy,
>>> The thing is, we have a 6-node Cloudera cluster running. For
>>> development purposes, I was building a map-reduce application with
>>> Maven on a single-node Apache Hadoop distribution.
>>>
>>> To be frank, I don't know how to deploy this application on a
>>> multi-node Cloudera cluster. I am fairly well versed with the
>>> multi-node Apache Hadoop distribution. So, how can I go forward?
>>>
>>> Thanks for all the help :)
>>>
>>> On Tue, Aug 13, 2013 at 9:22 PM,  <sandy.ryza@cloudera.com> wrote:
>>>> Hi Pavan,
>>>>
>>>> Configuration properties generally aren't included in the jar itself unless
>>>> you explicitly set them in your Java code. Rather, they're picked up from the
>>>> mapred-site.xml file located in the Hadoop configuration directory on the host
>>>> you're running your job from.
>>>>
>>>> Is there an issue you're coming up against when trying to run your
>>>> job on a cluster?
>>>>
>>>> -Sandy
>>>>
>>>> (iphnoe tpying)
>>>>
>>>> On Aug 13, 2013, at 4:19 AM, Pavan Sudheendra <pavan0591@gmail.com> wrote:
>>>>
>>>>> Hi,
>>>>> I'm currently using Maven to build the jars necessary for my
>>>>> map-reduce program to run, and it works for a single-node cluster.
>>>>>
>>>>> For a multi-node cluster, how do I get my map-reduce program to
>>>>> pick up the cluster settings instead of the localhost settings?
>>>>> I don't know how to specify this when using Maven to build my jar.
>>>>>
>>>>> I'm using the CDH distribution, by the way.
>>>>> --
>>>>> Regards-
>>>>> Pavan
>>>
>>>
>>>
>>> --
>>> Regards-
>>> Pavan
>>
>>
>>
>> --
>> Regards-
>> Pavan
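
Regarding Sandy's point above about where the configuration comes from:
if the job has to be submitted from a machine that does not have the
cluster's Hadoop configuration directory, the client-side Configuration
can be pointed at the cluster explicitly. A small sketch (the paths,
hostnames, and ports below are assumptions for illustration):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;

public class ClusterConfCheck {
  public static void main(String[] args) {
    Configuration conf = new Configuration();

    // Option 1: add the cluster's client configs (e.g. copied from a Cloudera
    // gateway node; the path below is a common default, not confirmed by the thread).
    conf.addResource(new Path("/etc/hadoop/conf/core-site.xml"));
    conf.addResource(new Path("/etc/hadoop/conf/mapred-site.xml"));

    // Option 2: set the key properties directly (hostnames/ports are placeholders).
    // conf.set("fs.defaultFS", "hdfs://namenode-host:8020");
    // conf.set("mapred.job.tracker", "jobtracker-host:8021");   // MRv1
    // conf.set("mapreduce.framework.name", "yarn");             // MRv2 / YARN

    // If this still prints "local", job submission will fall back to the LocalJobRunner.
    System.out.println("mapred.job.tracker = " + conf.get("mapred.job.tracker", "local"));
  }
}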



-- 
Regards-
Pavan
