hadoop-common-issues mailing list archives

From "Chris Nauroth (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (HADOOP-10747) Support configurable retries on SASL connection failures in RPC client.
Date Tue, 24 Jun 2014 19:36:24 GMT

     [ https://issues.apache.org/jira/browse/HADOOP-10747?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Chris Nauroth updated HADOOP-10747:

    Attachment: HADOOP-10747.2.patch

Haohui and Nicholas, thank you for taking a look.  I agree that this can be a private property.
I'm attaching patch v2, which removes the core-default.xml change and defines the property
in {{CommonConfigurationKeys}} instead of {{CommonConfigurationKeysPublic}}.
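As a rough sketch of the change described above, a private (undocumented) configuration key plus a default mirroring the previously hard-coded value might look like the following. The constant and property names here are illustrative assumptions, not necessarily the exact identifiers in the patch:

```java
// Hypothetical sketch: a private config key kept in CommonConfigurationKeys
// (not the public class), so it is not documented in core-default.xml.
public class CommonConfigurationKeys {
    // Maximum number of retries on SASL connection failure in the RPC client.
    public static final String IPC_CLIENT_CONNECT_MAX_RETRIES_ON_SASL_KEY =
        "ipc.client.connect.max.retries.on.sasl";
    // Default preserves the previously hard-coded limit of 5 retries.
    public static final int IPC_CLIENT_CONNECT_MAX_RETRIES_ON_SASL_DEFAULT = 5;
}
```

Keeping the key out of {{CommonConfigurationKeysPublic}} and core-default.xml is what makes it effectively private: it works if set, but is not advertised as a supported tuning knob.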

bq. Should it use Connection.connectionRetryPolicy?

The code treats these SASL failures differently from connection failures, so I prefer to isolate
these retries behind a separate configuration property instead of reusing the retry policy.
Reusing the retry policy might have unintended side effects.  For example, someone who increases
their connection retries might not expect that this could also increase the connection load on
their KDC whenever the client decides it needs to re-login to obtain a TGT.

> Support configurable retries on SASL connection failures in RPC client.
> -----------------------------------------------------------------------
>                 Key: HADOOP-10747
>                 URL: https://issues.apache.org/jira/browse/HADOOP-10747
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: ipc
>    Affects Versions: 3.0.0, 2.4.0
>            Reporter: Chris Nauroth
>            Assignee: Chris Nauroth
>         Attachments: HADOOP-10747.1.patch, HADOOP-10747.2.patch
> The RPC client includes a retry loop around SASL connection failures.  Currently, this
> is hard-coded to a maximum of 5 retries.  Let's make this configurable.

This message was sent by Atlassian JIRA
