hadoop-hdfs-issues mailing list archives

From "Weiwei Yang (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (HDFS-14207) ZKFC should catch exception when ha configuration missing
Date Tue, 22 Jan 2019 04:08:00 GMT

     [ https://issues.apache.org/jira/browse/HDFS-14207?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Weiwei Yang updated HDFS-14207:
-------------------------------
       Resolution: Fixed
     Hadoop Flags: Reviewed
    Fix Version/s: 3.3.0
           Status: Resolved  (was: Patch Available)

> ZKFC should catch exception when ha configuration missing
> ---------------------------------------------------------
>
>                 Key: HDFS-14207
>                 URL: https://issues.apache.org/jira/browse/HDFS-14207
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: hdfs
>    Affects Versions: 3.1.1, 3.0.3
>            Reporter: Fei Hui
>            Assignee: Fei Hui
>            Priority: Major
>             Fix For: 3.3.0
>
>         Attachments: HDFS-14207.001.patch
>
>
> When I test HDFS ZKFC with a wrong configuration, I cannot start the zkfc process, and I do not find any errors in the log except the command error below:
> {quote}
> ERROR: Cannot set priority of zkfc process 59556
> {quote}
> Debugging zkfc and digging into the code, I found that zkfc exits because of a HadoopIllegalArgumentException. I think we should catch this exception and log it.
> The code that throws HadoopIllegalArgumentException is as follows:
> {code:java}
>   public static DFSZKFailoverController create(Configuration conf) {
>     Configuration localNNConf = DFSHAAdmin.addSecurityConfiguration(conf);
>     String nsId = DFSUtil.getNamenodeNameServiceId(conf);
>     if (!HAUtil.isHAEnabled(localNNConf, nsId)) {
>       throw new HadoopIllegalArgumentException(
>           "HA is not enabled for this namenode.");
>     }
>     String nnId = HAUtil.getNameNodeId(localNNConf, nsId);
>     if (nnId == null) {
>       String msg = "Could not get the namenode ID of this node. " +
>           "You may run zkfc on the node other than namenode.";
>       throw new HadoopIllegalArgumentException(msg);
>     }
>     NameNode.initializeGenericKeys(localNNConf, nsId, nnId);
>     DFSUtil.setGenericConf(localNNConf, nsId, nnId, ZKFC_CONF_KEYS);
>     
>     NNHAServiceTarget localTarget = new NNHAServiceTarget(
>         localNNConf, nsId, nnId);
>     return new DFSZKFailoverController(localNNConf, localTarget);
>   }
> {code}
> In the DFSZKFailoverController main function, we do not catch it and do not log it:
> {code:java}
>  public static void main(String args[])
>       throws Exception {
>     StringUtils.startupShutdownMessage(DFSZKFailoverController.class,
>         args, LOG);
>     if (DFSUtil.parseHelpArgument(args, 
>         ZKFailoverController.USAGE, System.out, true)) {
>       System.exit(0);
>     }
>     
>     GenericOptionsParser parser = new GenericOptionsParser(
>         new HdfsConfiguration(), args);
>     DFSZKFailoverController zkfc = DFSZKFailoverController.create(
>         parser.getConfiguration());
>     try {
>       System.exit(zkfc.run(parser.getRemainingArgs()));
>     } catch (Throwable t) {
>       LOG.error("DFSZKFailOverController exiting due to earlier exception "
>           + t);
>       terminate(1, t);
>     }
>   }
> {code}
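> A minimal sketch of the kind of change I have in mind (assuming we simply move the create() call inside the existing try block; this is an illustration, not necessarily identical to the attached HDFS-14207.001.patch):
> {code:java}
>   public static void main(String args[])
>       throws Exception {
>     StringUtils.startupShutdownMessage(DFSZKFailoverController.class,
>         args, LOG);
>     if (DFSUtil.parseHelpArgument(args,
>         ZKFailoverController.USAGE, System.out, true)) {
>       System.exit(0);
>     }
>
>     GenericOptionsParser parser = new GenericOptionsParser(
>         new HdfsConfiguration(), args);
>     try {
>       // create() may throw HadoopIllegalArgumentException when the HA
>       // configuration is missing; catching it here lets us log the real
>       // cause instead of the process dying silently.
>       DFSZKFailoverController zkfc = DFSZKFailoverController.create(
>           parser.getConfiguration());
>       System.exit(zkfc.run(parser.getRemainingArgs()));
>     } catch (Throwable t) {
>       LOG.error("DFSZKFailOverController exiting due to earlier exception "
>           + t);
>       terminate(1, t);
>     }
>   }
> {code}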



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

