hadoop-hdfs-issues mailing list archives

From "Chris Nauroth (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (HDFS-7073) Allow falling back to a non-SASL connection on DataTransferProtocol in several edge cases.
Date Tue, 16 Sep 2014 19:20:35 GMT

     [ https://issues.apache.org/jira/browse/HDFS-7073?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Chris Nauroth updated HDFS-7073:
    Attachment: HDFS-7073.1.patch

I'm attaching the patch.  Summary:
# {{SaslDataTransferClient/Server}}: Remove checks that had enforced a requirement of
SASL configuration.  These checks were too strict to support use of {{ignore.secure.ports.for.testing}}.
 Additionally, the client piece has new logic to support fallback when the cluster is unsecured
but using block access tokens.  I think this is an unusual configuration, so I expect this
code path to be executed only very rarely.
# {{DFSOutputStream}}: The new client fallback logic also needed some coordination at this
layer.  If the client attempts a connection with SASL to a non-SASL DataNode, then the DataNode
closes the socket after rejecting the unexpected message.  The coordination here in the {{DFSOutputStream}}
reconnect loop ensures that we get another chance with an open socket.
# {{DataNode}}: There had been some mishandling in {{checkSecureConfig}} around checking the
{{dfs.data.transfer.protection}} property.  It's defined in hdfs-default.xml, so it always
comes in with empty string as the default (not null).  I changed some of this logic to check
for empty string instead of null.
# {{TestSaslDataTransfer}}: Tests have been updated for better coverage of {{DataNode#checkSecureConfig}}.
 I also took the opportunity to apply a timeout.
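To illustrate the empty-string-vs-null distinction from item 3: because hdfs-default.xml defines {{dfs.data.transfer.protection}} with an empty default, a null check never detects the "unconfigured" case.  The following is a minimal illustrative sketch, not the actual patch code; the class and method names are hypothetical.

```java
// Hypothetical sketch of the check described above.  Since
// hdfs-default.xml defines dfs.data.transfer.protection with an
// empty default value, Configuration#get returns "" (not null)
// when the administrator has set nothing, so the logic must test
// for empty string rather than null.
public class SecureConfigCheck {
    // Returns true only when SASL protection is actually configured.
    static boolean isSaslConfigured(String dataTransferProtection) {
        return dataTransferProtection != null
            && !dataTransferProtection.isEmpty();
    }

    public static void main(String[] args) {
        // "" is what arrives via the hdfs-default.xml default.
        System.out.println(isSaslConfigured(""));               // false
        System.out.println(isSaslConfigured("authentication")); // true
    }
}
```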

> Allow falling back to a non-SASL connection on DataTransferProtocol in several edge cases.
> ------------------------------------------------------------------------------------------
>                 Key: HDFS-7073
>                 URL: https://issues.apache.org/jira/browse/HDFS-7073
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: datanode, hdfs-client, security
>            Reporter: Chris Nauroth
>            Assignee: Chris Nauroth
>         Attachments: HDFS-7073.1.patch
> HDFS-2856 implemented general SASL support on DataTransferProtocol.  Part of that work
also included a fallback mode in case the remote cluster is running under a different configuration
without SASL.  I've discovered a few edge case configurations that this did not support:
> * Cluster is unsecured, but has block access tokens enabled.  This is not something I've
seen done in practice, but I've heard historically it has been allowed.  The HDFS-2856 code
relied on seeing an empty block access token to trigger fallback, and this doesn't work if
the unsecured cluster actually is using block access tokens.
> * The DataNode has an unpublicized testing configuration property that can be used
to skip the privileged port check.  However, the HDFS-2856 code still enforces the
requirement of SASL when the ports are not privileged, so existing configurations would
be forced to make changes to activate SASL.
> This patch will restore the old behavior so that these edge case configurations will
continue to work the same way.
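For reference, a sketch of the two edge-case configurations described above, in hdfs-site.xml form.  The property names are the real keys ({{dfs.block.access.token.enable}} and the testing property named earlier in this message); the values shown are illustrative only.

```xml
<!-- Edge case 1: cluster is unsecured but block access tokens are enabled. -->
<property>
  <name>dfs.block.access.token.enable</name>
  <value>true</value>
</property>

<!-- Edge case 2: the unpublicized testing property that skips the
     privileged port check on the DataNode. -->
<property>
  <name>ignore.secure.ports.for.testing</name>
  <value>true</value>
</property>
```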

This message was sent by Atlassian JIRA
