hadoop-common-issues mailing list archives

From "Pradeep Nayak Udupi Kadbet (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HADOOP-12345) Credential length in CredentialsSys.java incorrect
Date Thu, 23 Jun 2016 17:35:16 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-12345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15346835#comment-15346835 ]

Pradeep Nayak Udupi Kadbet commented on HADOOP-12345:

Andrew - 

You would need an NFS server to do the NFS-server-level test. I can point to the RFC where
this is mentioned and note the error the NFS server would respond with.
Would that suffice?

I will separate out the changes for HADOOP-11823 and make that a separate fix. I will try
adding a unit test for that as well.

> Credential length in CredentialsSys.java incorrect
> --------------------------------------------------
>                 Key: HADOOP-12345
>                 URL: https://issues.apache.org/jira/browse/HADOOP-12345
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: nfs
>    Affects Versions: 2.6.0, 2.7.0
>            Reporter: Pradeep Nayak Udupi Kadbet
>            Priority: Critical
>         Attachments: HADOOP-12345.patch
> Hi -
> There is a bug in the way hadoop-nfs sets the credential length in the "Credentials" field
of the NFS RPC packet when using AUTH_SYS.
> In CredentialsSys.java, when we write the credentials into the XDR object, we set the length
as follows:
>  // mStamp + mHostName.length + mHostName + mUID + mGID + mAuxGIDs.count
> mCredentialsLength = 20 + mHostName.getBytes().length;
> (The 20 corresponds to 4 bytes for mStamp, 4 bytes for mUID, 4 bytes for mGID, 4 bytes for
the length field of the hostname, and 4 bytes for the number of aux GIDs), and this is okay.
> However, when we add the length of the hostname to this, we do not add the extra padding
bytes for the hostname (when its length is not a multiple of 4). When the NFS server reads the
packet, it returns GARBAGE_ARGS because the uid field is not where the server expects to read
it. I can reproduce this issue consistently on machines where the hostname length is not a
multiple of 4.
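> To make the misalignment concrete, here is a rough sketch of the arithmetic (the hostname
is made up; only the lengths matter):
> int len = "node-01".getBytes().length;        // 7 bytes
> int declared = 20 + len;                      // 27: what the current code advertises
> int onWire = 20 + len + ((4 - len % 4) % 4);  // 28: XDR pads the hostname to 8 bytes
> // The server walks the padded 8-byte hostname and then looks for mUID one byte
> // past where the declared credential length says the fields end.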
> A possible fix is to do something like this:
> int pad = (4 - mHostName.getBytes().length % 4) % 4;
>  // mStamp + mHostName.length + mHostName + mUID + mGID + mAuxGIDs.count
> mCredentialsLength = 20 + mHostName.getBytes().length + pad;
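> A quick standalone check of the rounding (class name and hostnames are hypothetical; this
is not Hadoop code):
> public class CredLenCheck {
>     // XDR pads opaque data up to the next multiple of 4 bytes
>     static int pad(int len) { return (4 - len % 4) % 4; }
>
>     public static void main(String[] args) {
>         for (String host : new String[] {"node", "node1", "node-01"}) {
>             int len = host.getBytes().length;
>             // 20 = 4 bytes each for mStamp, the hostname length field, mUID, mGID,
>             // and the aux GID count
>             System.out.println(host + " -> credential length " + (20 + len + pad(len)));
>         }
>     }
> }
> For "node" (4 bytes) the pad is 0 and the current code already agrees; for the other two
hostnames the current code under-counts by the pad.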
> I would be happy to submit the patch, but I need some help committing it into mainline. I
haven't committed to Hadoop yet.
> Cheers!
> Pradeep
