hadoop-common-dev mailing list archives

From "Konstantin Shvachko (JIRA)" <j...@apache.org>
Subject [jira] Created: (HADOOP-267) Data node should shutdown when a "critical" error is returned by the name node
Date Thu, 01 Jun 2006 02:40:29 GMT
Data node should shutdown when a "critical" error is returned by the name node
------------------------------------------------------------------------------

         Key: HADOOP-267
         URL: http://issues.apache.org/jira/browse/HADOOP-267
     Project: Hadoop
        Type: Bug

  Components: dfs  
    Reporter: Konstantin Shvachko
    Priority: Minor


Currently the data node does not distinguish between critical and non-critical exceptions.
Any exception is treated as a signal to sleep and then try again; see
org.apache.hadoop.dfs.DataNode.run()
This happens because RPC always throws the same RemoteException.
In some cases (such as UnregisteredDatanodeException or IncorrectVersionException) the data 
node should shut down rather than retry.
This logic naturally belongs in 
org.apache.hadoop.dfs.DataNode.offerService()
but can only be reasonably implemented (without examining the RemoteException.className 
field) after HADOOP-266 (2) is fixed.
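
A minimal sketch of the intended behavior, assuming HADOOP-266 lets offerService()
catch the name node's typed exceptions directly instead of parsing
RemoteException.className. The names shouldRun, LOG, shutdown(), and
heartbeatInterval stand in for the real DataNode fields and are simplified here;
this is illustration only, not the actual patch:

  public void offerService() throws Exception {
    while (shouldRun) {
      try {
        // ... send heartbeat / block report to the name node via RPC ...
      } catch (UnregisteredDatanodeException e) {
        // Critical: the name node does not know this data node; retrying
        // cannot succeed, so shut down instead of looping.
        LOG.error("Unregistered data node, shutting down: " + e);
        shutdown();
        return;
      } catch (IncorrectVersionException e) {
        // Critical: version mismatch with the name node; retrying is pointless.
        LOG.error("Incompatible version, shutting down: " + e);
        shutdown();
        return;
      } catch (IOException e) {
        // Non-critical: keep the current behavior -- log, sleep, retry.
        LOG.warn("Exception while offering service, will retry: " + e);
        Thread.sleep(heartbeatInterval);
      }
    }
  }

The point is that critical exceptions escape the retry loop entirely, while
everything else falls through to the existing sleep-and-retry path.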

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
   http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see:
   http://www.atlassian.com/software/jira

