hadoop-common-user mailing list archives

From praveenesh kumar <praveen...@gmail.com>
Subject Utilizing multiple hard disks for hadoop HDFS ?
Date Fri, 02 Dec 2011 05:35:57 GMT
Hi everyone,

So I have a blade server with 4x500 GB hard disks.
I want to use all of these hard disks for Hadoop HDFS.
How can I achieve this?

Suppose I install Hadoop on one hard disk and mount the other hard disks as
normal partitions, e.g.:

/dev/sda1 -- HDD 1 -- Primary partition -- Linux + Hadoop installed on it
/dev/sdb1 -- HDD 2 -- Mounted partition -- /mnt/dev/sdb1
/dev/sdc1 -- HDD 3 -- Mounted partition -- /mnt/dev/sdc1
/dev/sdd1 -- HDD 4 -- Mounted partition -- /mnt/dev/sdd1

Then suppose I create a hadoop.tmp.dir on each partition, say
"/tmp/hadoop-datastore/hadoop-hadoop",

and in core-site.xml configure it like this:
<property>
    <name>hadoop.tmp.dir</name>

<value>/tmp/hadoop-datastore/hadoop-hadoop,/mnt/dev/sdb1/tmp/hadoop-datastore/hadoop-hadoop,/mnt/dev/sdc1/tmp/hadoop-datastore/hadoop-hadoop,/mnt/dev/sdd1/tmp/hadoop-datastore/hadoop-hadoop</value>
    <description>A base for other temporary directories.</description>
</property>

Will it work?

Can I set the same list of directories for dfs.data.dir as well?
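
For comparison, dfs.data.dir (set in hdfs-site.xml) is documented to accept a
comma-delimited list of directories, with the DataNode spreading blocks across
all of them. A minimal sketch, using hypothetical mount points (/mnt/disk2 etc.
are placeholders, not the poster's actual layout):

<!-- hdfs-site.xml sketch: dfs.data.dir takes a comma-separated list;
     each entry should live on a different physical disk. -->
<property>
    <name>dfs.data.dir</name>
    <value>/mnt/disk2/hdfs/data,/mnt/disk3/hdfs/data,/mnt/disk4/hdfs/data</value>
    <description>Comma-separated list of DataNode block storage directories.</description>
</property>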

Thanks,
Praveenesh
