hadoop-mapreduce-user mailing list archives

From Geelong Yao <geelong...@gmail.com>
Subject Re: How can I add a new hard disk in an existing HDFS cluster?
Date Fri, 03 May 2013 08:14:29 GMT
You can change the setting of dfs.data.dir in hdfs-site.xml if your version
is 1.x. Note that the value must be a comma-separated list of directories, so
mount the new disk somewhere first (e.g. /mnt/vdb) and point the property at
that mount point, not at the raw device:
<property>
        <name>dfs.data.dir</name>
        <value>/usr/hadoop/tmp/dfs/data,/mnt/vdb/dfs/data</value>
    </property>
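For completeness, the end-to-end steps (format, mount, reconfigure, restart) might look like the sketch below on the asker's 1.0.4 setup. The mount point /mnt/vdb, the ext4 choice, and the hadoop:hadoop owner are illustrative assumptions, not requirements; XFS works just as well for a DataNode directory.

```shell
# Assumptions: /dev/vdb is the new, empty 60GB disk; /mnt/vdb is an
# arbitrary mount point chosen for this example; the DataNode runs as
# the "hadoop" user. Requires root for mkfs/mount/fstab changes.

# 1. Format the disk (ext4 here; XFS is equally fine for HDFS)
sudo mkfs.ext4 /dev/vdb

# 2. Mount it and make the mount persistent across reboots
sudo mkdir -p /mnt/vdb
sudo mount /dev/vdb /mnt/vdb
echo '/dev/vdb /mnt/vdb ext4 defaults 0 0' | sudo tee -a /etc/fstab

# 3. Create a data directory owned by the user running the DataNode
sudo mkdir -p /mnt/vdb/dfs/data
sudo chown -R hadoop:hadoop /mnt/vdb/dfs/data

# 4. Add the new directory to dfs.data.dir in hdfs-site.xml
#    (comma-separated, no spaces), then restart the DataNode:
/usr/local/hadoop-1.0.4/bin/hadoop-daemon.sh stop datanode
/usr/local/hadoop-1.0.4/bin/hadoop-daemon.sh start datanode

# 5. Verify that the reported cluster capacity has grown
/usr/local/hadoop-1.0.4/bin/hadoop dfsadmin -report
```

Repeat on each DataNode that gets a new disk; HDFS round-robins new blocks across all directories listed in dfs.data.dir.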


2013/5/3 Joarder KAMAL <joarderm@gmail.com>

> Hi,
>
>  I have a running HDFS cluster (Hadoop/HBase) consisting of 4 nodes, and
> the initial hard disk (/dev/vda1) is only 10GB. Now I have a second hard
> drive, /dev/vdb, of 60GB, and I want to add it to my existing HDFS
> cluster. How can I format the new hard disk (and with which filesystem?
> XFS?) and mount it to work with HDFS?
>
> The default HDFS directory is
> /usr/local/hadoop-1.0.4/hadoop-datastore
> I followed this link for installation:
>
> http://ankitasblogger.blogspot.com.au/2011/01/hadoop-cluster-setup.html
>
> Many thanks in advance :)
>
>
> Regards,
> Joarder Kamal
>



-- 
From Good To Great
