hadoop-common-user mailing list archives

From "Zheng, Kai" <kai.zh...@intel.com>
Subject RE: dfs.datanode.failed.volumes.tolerated change
Date Fri, 08 Jan 2016 08:39:30 GMT
As far as I know, Hadoop 2.6 supports hot-swapping disks on a DataNode without restarting the
DataNode. Roughly, you need to perform two operations:
1) update dfs.datanode.data.dir in the DataNode's configuration to reflect the disks you
removed or added;
2) have the DataNode reload its configuration.
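For example, assuming the DataNode's IPC address is dn-host:50020 (dfs.datanode.ipc.address;
50020 is the Hadoop 2.x default, and the host name and data directories below are only
placeholders), the reload can be triggered with hdfs dfsadmin, roughly like this:

    # 1) In hdfs-site.xml on the DataNode, update the data directories,
    #    e.g. to add the replaced disk back:
    #
    #    <property>
    #      <name>dfs.datanode.data.dir</name>
    #      <value>/data/1/dfs/dn,/data/2/dfs/dn</value>
    #    </property>

    # 2) Tell the DataNode to reload the changed property (no restart needed):
    hdfs dfsadmin -reconfig datanode dn-host:50020 start

    # The reload runs asynchronously; poll until it reports completion:
    hdfs dfsadmin -reconfig datanode dn-host:50020 status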

Please search the web and check out the related docs for this feature. Hope this helps.


From: yaoxiaohua [mailto:yaoxiaohua@outlook.com]
Sent: Friday, January 08, 2016 11:24 AM
To: user@hadoop.apache.org
Subject: dfs.datanode.failed.volumes.tolerated change

                The DataNode process shut down abnormally. Because I found that one disk could
                no longer be accessed, I set dfs.datanode.failed.volumes.tolerated = 2 in
                hdfs-site.xml. Then I restarted the DataNode process, and it works.
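                For reference, that setting in hdfs-site.xml (the DataNode tolerates up to 2
                failed volumes before shutting itself down):

                    <property>
                      <name>dfs.datanode.failed.volumes.tolerated</name>
                      <value>2</value>
                    </property>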

                One day later, we replaced the bad disk with a good one, created the data
                folder, and chown'ed it to the appropriate user. Now I want to know: if I don't
                restart the DataNode process, when will the DataNode detect that the disk is
                good again? Or do I have to restart the DataNode process for it to pick this up?

Env: Hadoop 2.6

Best Regards,
