hadoop-mapreduce-user mailing list archives

From sam liu <samliuhad...@gmail.com>
Subject Can add a regular check in DataNode on free disk space?
Date Mon, 20 Oct 2014 02:56:04 GMT
Hi Experts and Developers,

At present, if a DataNode runs out of free disk space, there is no way to learn of this
bad situation from anywhere, including the DataNode log. Under this situation,
HDFS write operations fail and return the error msg below. However, from the
error msg, the user cannot tell that the root cause is the only DataNode
running out of disk space, and the DataNode log offers no useful hint either.
So I believe it would be better to add a regular check on free disk space in
the DataNode, which logs a WARNING or ERROR msg in the DataNode log when that
DataNode runs out of space. What's your opinion?

Error Msg:
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File
/user/hadoop/PiEstimator_TMP_3_141592654/in/part0 could only be replicated
to 0 nodes instead of minReplication (=1).  There are 1 datanode(s) running
and no node(s) are excluded in this operation.
        at
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget(BlockManager.java:1441)
        at
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2702)
        at
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:584)
        at
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:440)
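The periodic check proposed above could be sketched roughly as below. This is a hypothetical, standalone illustration (the class name, threshold, and interval are my own assumptions, not existing Hadoop code): it polls the usable space of a data directory on a schedule and logs a WARN-style message when it drops under a threshold.

```java
import java.io.File;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Hypothetical sketch of the proposed DataNode free-space check.
// Not actual Hadoop code; a real patch would hook into the DataNode's
// existing volume management and use its logger instead of System.err.
public class DataDirSpaceChecker {
    private final File dataDir;
    private final long minFreeBytes;

    public DataDirSpaceChecker(File dataDir, long minFreeBytes) {
        this.dataDir = dataDir;
        this.minFreeBytes = minFreeBytes;
    }

    /** Returns true if the volume backing dataDir still has at least minFreeBytes usable. */
    public boolean check() {
        long usable = dataDir.getUsableSpace();
        if (usable < minFreeBytes) {
            // In a real DataNode this would be LOG.warn(...) so the admin
            // sees the root cause directly in the DataNode log.
            System.err.println("WARN: data dir " + dataDir + " is low on space: "
                    + usable + " bytes usable, threshold " + minFreeBytes);
            return false;
        }
        return true;
    }

    /** Runs check() periodically, as a DataNode background task might. */
    public ScheduledExecutorService start(long intervalSeconds) {
        ScheduledExecutorService ses = Executors.newSingleThreadScheduledExecutor();
        ses.scheduleAtFixedRate(this::check, 0, intervalSeconds, TimeUnit.SECONDS);
        return ses;
    }

    public static void main(String[] args) {
        File dir = new File(System.getProperty("java.io.tmpdir"));
        // 1-byte threshold: passes on any machine with free space in tmp.
        System.out.println("ok=" + new DataDirSpaceChecker(dir, 1L).check());
        // Absurd threshold: forces the low-space warning branch.
        System.out.println("low=" + new DataDirSpaceChecker(dir, Long.MAX_VALUE).check());
    }
}
```

Running `main` prints `ok=true` and `low=false` (plus the warning line), assuming the temp volume has any free space at all.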


Thanks!
