hadoop-common-user mailing list archives
From Hemanth Yamijala <yhema...@yahoo-inc.com>
Subject Re: Integrate HADOOP and Map/Reduce paradigm into HPC environment
Date Tue, 02 Sep 2008 03:51:34 GMT
Allen Wittenauer wrote:
>
> On 8/18/08 11:33 AM, "Filippo Spiga" <spiga.filippo@gmail.com> wrote:
>   
>> Well, but I haven't understood how I should configure HOD to work in this
>> manner.
>>
>> For HDFS I follow this sequence of steps:
>> - conf/masters contains only the master node of my cluster
>> - conf/slaves contains all nodes
>> - I start HDFS using bin/start-dfs.sh
>>     
>
>     Right, fine...
>
>   
>> Potentially I would like to allow all nodes to be used for MapReduce.
>> For HOD, which parameters should I set in contrib/hod/conf/hodrc? Should I
>> change only the gridservice-hdfs section?
>>     
>
>     I was hoping the HOD folks would answer this question for you, but they
> are apparently sleeping. :)
>
>   
Whoops! Sorry, I missed this.
>     Anyway, yes, if you point gridservice-hdfs to a static HDFS,  it should
> use that as the -default- HDFS. That doesn't prevent a user from using HOD
> to create a custom HDFS as part of their job submission.
>
>   
Allen's answer is perfect. Please refer to 
http://hadoop.apache.org/core/docs/current/hod_user_guide.html#Using+an+external+HDFS
for more information about how to set up the gridservice-hdfs section to
use a static or external HDFS.
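
In case it helps, here is a minimal sketch of what that section might look
like when pointing HOD at an externally managed HDFS. The hostname, ports,
and install path below are placeholders, not values from your cluster; they
must match your own static HDFS deployment:

    [gridservice-hdfs]
    # Use the externally started HDFS instead of provisioning one per job
    external = True
    # Hostname of the static HDFS NameNode (placeholder)
    host = namenode.example.com
    # NameNode RPC port (should match your fs.default.name)
    fs_port = 9000
    # NameNode web UI port
    info_port = 50070
    # Hadoop installation directory on the cluster nodes (placeholder)
    pkgs = /usr/local/hadoop

With external set to true, an allocation such as

    hod allocate -d ~/hod-clusters/test -n 4

should bring up only the Map/Reduce daemons on the allocated nodes, and
jobs submitted to that cluster would read from and write to the shared HDFS.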


