bahir-reviews mailing list archives

From ckadner <>
Subject [GitHub] bahir issue #28: [BAHIR-75] [WIP] Remote HDFS connector for Apache Spark usi...
Date Mon, 23 Jan 2017 23:36:10 GMT
Github user ckadner commented on the issue:
    A few high-level questions before jumping into more detailed code review:
    Can you elaborate on the differences, limitations, and advantages compared to Hadoop's default "webhdfs" scheme? For example:
    - the main problem you are working around is that Hadoop's `WebHdfsFileSystem` discards the Knox gateway path when constructing the HTTP URL (the principal motivation for this connector), which makes it impossible to use with Knox
    - the Hadoop WebHdfsFileSystem implements additional interfaces like:
       - `DelegationTokenRenewer.Renewable`
       - `TokenAspect.TokenManagementDelegator`
    - what are the performance differences between your approach and Hadoop's _RemoteFS_ and _WebHDFS_?
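    To make the gateway-path problem concrete, here is a minimal sketch of the two URL shapes involved, based on the public WebHDFS REST API and Knox documentation (host names, ports, and the gateway path are illustrative, not taken from the PR):

```java
// Sketch of the HTTP URLs a WebHDFS client must produce (illustrative names).
public class WebHdfsUrls {
    // Stock WebHDFS REST endpoint: http://<host>:<port>/webhdfs/v1/<path>?op=...
    static String direct(String host, int port, String path, String op) {
        return "http://" + host + ":" + port + "/webhdfs/v1" + path + "?op=" + op;
    }

    // Through Knox, the gateway path must be preserved in front of /webhdfs/v1;
    // this is the piece Hadoop's WebHdfsFileSystem drops from the URI.
    static String viaKnox(String host, int port, String gatewayPath,
                          String path, String op) {
        return "https://" + host + ":" + port + gatewayPath
                + "/webhdfs/v1" + path + "?op=" + op;
    }

    public static void main(String[] args) {
        System.out.println(direct("nn.example.com", 50070, "/tmp/f.txt", "OPEN"));
        // http://nn.example.com:50070/webhdfs/v1/tmp/f.txt?op=OPEN
        System.out.println(viaKnox("knox.example.com", 8443, "/gateway/default",
                                   "/tmp/f.txt", "OPEN"));
        // https://knox.example.com:8443/gateway/default/webhdfs/v1/tmp/f.txt?op=OPEN
    }
}
```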
    Some configuration parameters are specific to remote servers and should be specified per server rather than at the connector level (some server-level settings may override connector-level defaults), e.g.
    - Server level:
      - gateway path (assuming one Knox gateway per server)
      - user name and password
      - authentication method (think Kerberos etc)
    - Connector level:
      - certificate validation options (maybe overridden by server level props)
      - trustStore path
      - webhdfs protocol version (maybe overridden by server level props)
      - buffer sizes, file chunk sizes, retry intervals, etc.
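    As a sketch of the override semantics I have in mind, per-server entries could shadow connector-wide defaults like this (all key names are made up for illustration, not from the PR):

```java
import java.util.Properties;

// Hypothetical property layout: connector-wide defaults under "webhdfs.<key>",
// per-server overrides under "webhdfs.<serverAlias>.<key>".
public class ConfigResolution {
    static String resolve(Properties props, String server, String key) {
        String specific = props.getProperty("webhdfs." + server + "." + key);
        // Fall back to the connector-level default when no server override exists.
        return specific != null ? specific
                                : props.getProperty("webhdfs." + key);
    }

    public static void main(String[] args) {
        Properties p = new Properties();
        p.setProperty("webhdfs.certValidation", "true");        // connector level
        p.setProperty("webhdfs.prod.certValidation", "false");  // server override
        System.out.println(resolve(p, "prod", "certValidation")); // false
        System.out.println(resolve(p, "dev", "certValidation"));  // true
    }
}
```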
    Given that users need to know about the remote Hadoop server configuration (security, gateway path, etc.) for WebHDFS access, would it be nicer if ...
     - users could separately configure server specific properties in a config file or registry
     - and then in Spark jobs only use <server>:<port>/<resourcePath> without
having to provide additional properties
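    Such a server registry could, for example, be a simple properties file (the key names below are hypothetical, not part of the PR):

```properties
# Hypothetical server registry entries (illustrative key names only)
webhdfs.server.prod.gatewayPath=/gateway/default
webhdfs.server.prod.authMethod=basic
webhdfs.server.prod.trustStorePath=/etc/security/prod-truststore.jks
```

    A Spark job would then only reference `<server>:<port>/<resourcePath>` and the connector would look up the remaining settings from the registry.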
    - what authentication methods are supported besides basic auth (e.g. OAuth, Kerberos, ...)?
    - should the connector manage auth tokens, token renewal, etc.?
    - I don't think the connector should create a truststore; it should either skip certificate validation or take a user-provided truststore path (btw, the current code fails to create a truststore on Mac OS X)
    - the code should log at INFO, DEBUG, and ERROR levels using the Spark logging mechanisms (targeting the Spark log files)
    The outstanding unit tests should verify that the connector works with ...
    - a standard Hadoop cluster (unsecured)
    - Hadoop clusters secured by Apache Knox
    - Hadoop clusters secured by other mechanisms like Kerberos
