hadoop-common-dev mailing list archives

From "eric baldeschwieler (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-163) If a DFS datanode cannot write onto its file system, it should tell the name node not to assign new blocks to it.
Date Fri, 19 May 2006 18:35:33 GMT
    [ http://issues.apache.org/jira/browse/HADOOP-163?page=comments#action_12412571 ] 

eric baldeschwieler commented on HADOOP-163:
--------------------------------------------

We are trying to deal with the case where the node is misconfigured or broken.  Trying to operate
in these situations is hard; it is simpler to fail fast, IMO.  This leverages the designed
strengths of HDFS.  Our goal is to get the information to the operator so they can diagnose
and fix the problem, and to seal the failure off from the rest of the cluster.

This is distinct from the case where the node is simply full; a full disk would not trigger
this condition.
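
For illustration, a minimal datanode-side sketch of this distinction (hypothetical names and
modern Java, not the actual 0.2 code): a small probe write that treats a full disk as normal
but flags a broken or read-only volume so the node can fail fast.

    import java.io.File;
    import java.io.FileOutputStream;
    import java.io.IOException;

    class DiskProbe {
        enum DiskState { HEALTHY, FULL, FAILED }

        // Probe a data directory with a tiny write. A full disk is a normal
        // condition; a failed write on a non-full disk marks the volume bad.
        static DiskState probe(File dataDir, long minFreeBytes) {
            if (dataDir.getUsableSpace() < minFreeBytes) {
                return DiskState.FULL;        // full, but not broken
            }
            File probeFile = new File(dataDir, ".probe");
            try (FileOutputStream out = new FileOutputStream(probeFile)) {
                out.write(0);                 // fails on read-only or broken mounts
                out.getFD().sync();           // force the byte to the disk
                return DiskState.HEALTHY;
            } catch (IOException e) {
                return DiskState.FAILED;      // fail fast; report to the name node
            } finally {
                probeFile.delete();
            }
        }
    }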

> If a DFS datanode cannot write onto its file system, it should tell the name node not
to assign new blocks to it.
> -----------------------------------------------------------------------------------------------------------------
>
>          Key: HADOOP-163
>          URL: http://issues.apache.org/jira/browse/HADOOP-163
>      Project: Hadoop
>         Type: Bug
>   Components: dfs
>     Versions: 0.2
>     Reporter: Runping Qi
>     Assignee: Hairong Kuang
>      Fix For: 0.3
>
> I observed that sometimes, if a file system on a data node is not mounted properly, it
may not be writable. In this case, any data writes will fail. The name node should stop
assigning new blocks to that data node, and the web page should show that the node is in an
abnormal state.
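
For illustration, a minimal name-node-side sketch of the behavior the description asks for
(hypothetical names, not the actual HADOOP-163 patch): remember which datanodes have reported
write failures, exclude them from new block assignment, and let the status page read the same
state.

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    class DatanodeRegistry {
        enum NodeState { NORMAL, ABNORMAL }

        private final Map<String, NodeState> states = new ConcurrentHashMap<>();

        // Called when a datanode reports that local writes are failing.
        void markAbnormal(String nodeId) {
            states.put(nodeId, NodeState.ABNORMAL);
        }

        // Block placement skips abnormal nodes; the status web page can
        // render the same map to show which nodes are in trouble.
        boolean eligibleForNewBlocks(String nodeId) {
            return states.getOrDefault(nodeId, NodeState.NORMAL) == NodeState.NORMAL;
        }
    }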


