hadoop-common-dev mailing list archives

From "Raghu Angadi (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-3758) Excessive exceptions in HDFS namenode log file
Date Mon, 14 Jul 2008 22:24:31 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-3758?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12613472#action_12613472 ]

Raghu Angadi commented on HADOOP-3758:
--------------------------------------

That's pretty painful. We should include "IncorrectVersionException" as one of the fatal exceptions
at the datanode.

See {{DataNode.java:offerService()}} :
{noformat}
      } catch(RemoteException re) {
        String reClass = re.getClassName();
        if (UnregisteredDatanodeException.class.getName().equals(reClass) ||
            DisallowedDatanodeException.class.getName().equals(reClass)) {
          LOG.warn("DataNode is shutting down: " + 
                   StringUtils.stringifyException(re));
          shutdown();
          return;
        }
{noformat}
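
For reference, a minimal sketch of what that check might look like with the version-mismatch case added (assuming {{IncorrectVersionException}} is visible from {{DataNode}} like the other two exception classes in the snippet); this is just the shape of the change, not a tested patch:
{noformat}
      } catch(RemoteException re) {
        String reClass = re.getClassName();
        // Treat a version mismatch the same as the other fatal
        // registration errors: warn once, shut the datanode down,
        // and stop retrying against the namenode.
        if (UnregisteredDatanodeException.class.getName().equals(reClass) ||
            DisallowedDatanodeException.class.getName().equals(reClass) ||
            IncorrectVersionException.class.getName().equals(reClass)) {
          LOG.warn("DataNode is shutting down: " + 
                   StringUtils.stringifyException(re));
          shutdown();
          return;
        }
      }
{noformat}
With something along these lines, a datanode left behind during an upgrade would shut itself down instead of re-registering in a loop and flooding the namenode log as described in this issue.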



> Excessive exceptions in HDFS namenode log file
> ----------------------------------------------
>
>                 Key: HADOOP-3758
>                 URL: https://issues.apache.org/jira/browse/HADOOP-3758
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: dfs
>    Affects Versions: 0.17.1
>            Reporter: Jim Huang
>
> I upgraded a big cluster, out of which 10 nodes did not get upgraded.  
> The namenode log showed excessive exceptions, which caused the namenode log to eat the entire
> partition space; in this case a log file of close to 700GB was generated on the namenode.  

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

