hadoop-common-user mailing list archives

From Konstantin Shvachko <...@yahoo-inc.com>
Subject Re: Hadoop over Lustre?
Date Wed, 03 Sep 2008 23:01:57 GMT
Great!
If you decide to run TestDFSIO on your cluster, please let me know.
I'll run the same benchmark at the same scale on HDFS and we can compare the numbers.
--Konstantin
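
For context, TestDFSIO is the stock Hadoop I/O benchmark and ships in the Hadoop test jar. A minimal invocation sketch (the exact jar name and the file counts/sizes here are illustrative; adjust them for your release and cluster) looks like:

```shell
# Write phase: create 10 files of 1000 MB each and measure write throughput.
bin/hadoop jar hadoop-0.18.0-test.jar TestDFSIO -write -nrFiles 10 -fileSize 1000

# Read phase: read the same files back and measure read throughput.
bin/hadoop jar hadoop-0.18.0-test.jar TestDFSIO -read -nrFiles 10 -fileSize 1000
```

Aggregate results (throughput, average I/O rate) are appended to TestDFSIO_results.log in the working directory, which makes side-by-side comparisons between file systems straightforward.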

Joel Welling wrote:
> That seems to have done the trick!  I am now running Hadoop 0.18
> straight out of Lustre, without an intervening HDFS.  The unusual things
> about my hadoop-site.xml are:
> 
> <property>
>   <name>fs.default.name</name>
>   <value>file:///bessemer/welling</value>
> </property>
> <property>
>   <name>mapred.system.dir</name>
>   <value>${fs.default.name}/hadoop_tmp/mapred/system</value>
>   <description>The shared directory where MapReduce stores control
> files.
>   </description>
> </property>
> 
> where /bessemer/welling is a directory on a mounted Lustre filesystem.
> I then do 'bin/start-mapred.sh' (without starting dfs), and I can run
> Hadoop programs normally.  I do have to specify full input and output
> file paths; they don't seem to be resolved relative to fs.default.name.
> That's not too troublesome, though.
> 
> Thanks very much!  
> -Joel
>  welling@psc.edu
> 
> On Fri, 2008-08-29 at 10:52 -0700, Owen O'Malley wrote:
>> Check the setting for mapred.system.dir. This needs to be a path that is on
>> a distributed file system. In old versions of Hadoop, it had to be on the
>> default file system, but that is no longer true. In recent versions, the
>> system dir only needs to be configured on the JobTracker and it is passed to
>> the TaskTrackers and clients.
> 
> 
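
Putting Joel's recipe together, the end-to-end flow over Lustre (the mount path comes from his message; the example jar and job names are illustrative, not prescribed by the thread) is roughly:

```shell
# fs.default.name points at the Lustre mount (see hadoop-site.xml above),
# so only the MapReduce daemons are started; no HDFS daemons are needed.
bin/start-mapred.sh

# Jobs are run with absolute file:// paths, since inputs and outputs are
# not resolved relative to fs.default.name.
bin/hadoop jar hadoop-0.18.0-examples.jar wordcount \
    file:///bessemer/welling/input file:///bessemer/welling/output
```

Because every node mounts the same Lustre file system, the local file:// scheme behaves like a shared file system, which is what lets mapred.system.dir live there too.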
