hbase-user mailing list archives

From rajgopalv <raja.f...@gmail.com>
Subject Multiple directories for hadoop
Date Mon, 17 Jan 2011 15:01:27 GMT

I have a question about configuring dfs.data.dir.

One of my slaves has four 500 GB hard disks. They are mounted on different mount
points: /data1, /data2, /data3, /data4.
How can I make use of all four hard disks for HDFS data and the local jobcache?

If I give comma-separated values for dfs.data.dir, will the total data be
replicated on all four disks, or will it be spread across the four disks
(without replication)?
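For context, here is a minimal hdfs-site.xml sketch of the comma-separated form I mean (the /hdfs/data subdirectory names are just my choice; my understanding is the DataNode spreads blocks round-robin over the listed directories rather than duplicating data across them, since replication happens across nodes):

```xml
<!-- hdfs-site.xml: one DataNode storage directory per physical disk -->
<property>
  <name>dfs.data.dir</name>
  <value>/data1/hdfs/data,/data2/hdfs/data,/data3/hdfs/data,/data4/hdfs/data</value>
</property>
```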

And how do I increase the space for the local jobcache?
http://www.mail-archive.com/core-user@hadoop.apache.org/msg04346.html
says hadoop.tmp.dir cannot be comma-separated, but my MapReduce jobs will
eat a lot of local jobcache space.
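From what I can tell, the jobcache lives under mapred.local.dir, which (unlike hadoop.tmp.dir) does accept a comma-separated list, so something like this sketch in mapred-site.xml might spread it over all the disks (the /mapred/local paths are hypothetical names of mine):

```xml
<!-- mapred-site.xml: spread task-local / jobcache data over all disks -->
<property>
  <name>mapred.local.dir</name>
  <value>/data1/mapred/local,/data2/mapred/local,/data3/mapred/local,/data4/mapred/local</value>
</property>
```

Is that the right knob, or does something else need to change too?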

How should the configuration look for the above scenario?

-- 
View this message in context: http://old.nabble.com/Multiple-directories-for-hadoop-tp30676207p30676207.html
Sent from the HBase User mailing list archive at Nabble.com.

