hadoop-user mailing list archives

From unmesha sreeveni <unmeshab...@gmail.com>
Subject Re: Can add a regular check in DataNode on free disk space?
Date Mon, 20 Oct 2014 04:37:57 GMT
1. Stop all Hadoop daemons.
2. Remove all files from
3. Format the namenode.
4. Start all Hadoop daemons.
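If wiping the cluster is acceptable (note: formatting the NameNode destroys all HDFS metadata and data), the steps above can be sketched as shell commands. The storage paths below are assumptions; use the values of dfs.datanode.data.dir and dfs.namenode.name.dir from your own hdfs-site.xml:

```shell
# Sketch of the recovery steps above -- Hadoop 2.x command names assumed.
# WARNING: this destroys everything stored in HDFS.

# 1. Stop all Hadoop daemons
stop-dfs.sh
stop-yarn.sh

# 2. Remove all files from the storage directories
#    (paths are placeholders -- check hdfs-site.xml for the real ones)
rm -rf /path/to/dfs/data/* /path/to/dfs/name/*

# 3. Format the NameNode
hdfs namenode -format

# 4. Start all Hadoop daemons
start-dfs.sh
start-yarn.sh
```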

On Mon, Oct 20, 2014 at 8:26 AM, sam liu <samliuhadoop@gmail.com> wrote:

> Hi Experts and Developers,
> At present, if a DataNode has no free disk space, we cannot learn about
> this bad situation from anywhere, including the DataNode log. At the same
> time, HDFS write operations will fail and return the error message below.
> However, from that message the user cannot tell that the root cause is
> the only DataNode running out of disk space, and there is no useful hint
> in the DataNode log either. So I believe it would be better to add a
> regular check in the DataNode on free disk space, which would log a
> WARNING or ERROR message in the DataNode log when the node runs out of
> space. What's your opinion?
> Error Msg:
> org.apache.hadoop.ipc.RemoteException(java.io.IOException): File
> /user/hadoop/PiEstimator_TMP_3_141592654/in/part0 could only be replicated
> to 0 nodes instead of minReplication (=1).  There are 1 datanode(s) running
> and no node(s) are excluded in this operation.
>         at
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget(BlockManager.java:1441)
>         at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2702)
>         at
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:584)
>         at
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:440)
> Thanks!
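Until such a check exists inside the DataNode itself, a minimal interim workaround is a cron-style script on each DataNode host that logs a warning when the data volume runs low. This is only a sketch: the directory and threshold in the example call are assumptions, so point it at your actual dfs.datanode.data.dir. (HDFS does let you reserve per-volume headroom via dfs.datanode.du.reserved, but that setting does not produce the warning log message asked for here.)

```shell
#!/bin/sh
# check_free DIR THRESHOLD_KB:
#   print a WARN line (and return 1) if the filesystem holding DIR has
#   fewer than THRESHOLD_KB kilobytes available, otherwise print OK.
check_free() {
  dir="$1"
  threshold_kb="$2"
  # df -Pk: POSIX one-line-per-filesystem output in 1K blocks;
  # column 4 of the data line is "Available"
  avail_kb=$(df -Pk "$dir" | awk 'NR==2 {print $4}')
  if [ "$avail_kb" -lt "$threshold_kb" ]; then
    echo "WARN: volume $dir has only ${avail_kb} KB free"
    return 1
  fi
  echo "OK: $dir has ${avail_kb} KB free"
}

# Example (assumed path and threshold): warn when the data dir drops
# below the given number of free kilobytes
check_free /tmp 1
```

Run from cron and redirect the output into the DataNode's log directory (or syslog) to get the missing WARNING line until a built-in check is available.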

Thanks & Regards

Unmesha Sreeveni U.B
Hadoop, Bigdata Developer
Center for Cyber Security | Amrita Vishwa Vidyapeetham
