hadoop-mapreduce-user mailing list archives

From: Mapred Learn <mapred.le...@gmail.com>
Subject: Re: Change the storage directory.
Date: Fri, 25 Feb 2011 00:38:24 GMT
Did you try running the stop and start scripts?
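
A minimal sketch of that restart, assuming a single-node setup with
HADOOP_HOME pointing at your Hadoop install (paths here are illustrative):

  # Stop HDFS so the new hdfs-site.xml is picked up on restart
  $HADOOP_HOME/bin/stop-dfs.sh

  # Reformat the namenode after changing storage directories
  # (this wipes any existing HDFS data, so only do it on a test cluster)
  $HADOOP_HOME/bin/hadoop namenode -format

  # Start HDFS again; the datanode should now write its blocks under
  # the directory configured in dfs.data.dir
  $HADOOP_HOME/bin/start-dfs.sh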

On Thu, Feb 24, 2011 at 4:32 PM, real great..
<greatness.hardness@gmail.com> wrote:

> Hi,
> As far as I can tell, Hadoop creates the default DFS storage in the temp directory.
> I tried changing it by editing hdfs-site.xml to:
> <?xml version="1.0"?>
> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
>
> <!-- Put site-specific property overrides in this file. -->
>
> <configuration>
>   <property>
>     <name>dfs.replication</name>
>     <value>2</value>
>   </property>
>
>
>   <property>
>     <name>dfs.data.dir</name>
>     <value>/home/supreme/hdfs</value>
>     <description>Comma separated list of paths on the local filesystem of a
>     DataNode where it should store its blocks.</description>
>   </property>
> </configuration>
>
> However, this does not seem to work: HDFS is still being created in the temp
> directory even after I reformat it.
>
> --
> Regards,
> R.V.
>
