hadoop-common-issues mailing list archives

From "Hadoop QA (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HADOOP-12345) Credential length in CredentialsSys.java incorrect
Date Wed, 29 Jun 2016 02:06:28 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-12345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15354214#comment-15354214 ]

Hadoop QA commented on HADOOP-12345:
------------------------------------

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 27s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m  0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 25s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  0m 12s{color} | {color:orange} hadoop-common-project/hadoop-nfs: The patch generated 1 new + 11 unchanged - 0 fixed = 12 total (was 11) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 13s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  0s{color} | {color:red} The patch has 7 line(s) that end in whitespace. Use git apply --whitespace=fix. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 26s{color} | {color:green} hadoop-nfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 22s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 27m 39s{color} | {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:85209cc |
| JIRA Issue | HADOOP-12345 |
| GITHUB PR | https://github.com/apache/hadoop/pull/104 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  unit  findbugs  checkstyle  |
| uname | Linux 8bf9e45b11d3 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 77031a9 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| checkstyle | https://builds.apache.org/job/PreCommit-HADOOP-Build/9895/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-nfs.txt |
| whitespace | https://builds.apache.org/job/PreCommit-HADOOP-Build/9895/artifact/patchprocess/whitespace-eol.txt |
|  Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/9895/testReport/ |
| modules | C: hadoop-common-project/hadoop-nfs U: hadoop-common-project/hadoop-nfs |
| Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/9895/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Credential length in CredentialsSys.java incorrect
> --------------------------------------------------
>
>                 Key: HADOOP-12345
>                 URL: https://issues.apache.org/jira/browse/HADOOP-12345
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: nfs
>    Affects Versions: 2.6.0, 2.7.0
>            Reporter: Pradeep Nayak Udupi Kadbet
>            Assignee: Pradeep Nayak Udupi Kadbet
>            Priority: Critical
>         Attachments: HADOOP-12345.001.patch, HADOOP-12345.patch
>
>
> Hi -
> There is a bug in the way hadoop-nfs sets the credential length in the "Credentials" field of the NFS RPC packet when using AUTH_SYS.
> In CredentialsSys.java, when we write the creds into the XDR object, we set the length as follows:
> // mStamp + mHostName.length + mHostName + mUID + mGID + mAuxGIDs.count
> mCredentialsLength = 20 + mHostName.getBytes().length;
> (The 20 corresponds to 4 bytes each for mStamp, mUID, mGID, the length field of the hostname, and the count of aux GIDs.) That part is fine.
> However, when we add the length of the hostname to this, we do not add the padding bytes XDR requires when the hostname length is not a multiple of 4. When the NFS server reads the packet, the uid field is not where it expects, so it returns GARBAGE_ARGS. I can reproduce this issue consistently on machines where the hostname length is not a multiple of 4.
> A possible fix is to do something like this (a runnable sketch follows the quoted report below):
> int pad = (4 - (mHostName.getBytes().length % 4)) % 4;
> // mStamp + mHostName.length + mHostName + mUID + mGID + mAuxGIDs.count
> mCredentialsLength = 20 + mHostName.getBytes().length + pad;
> I would be happy to submit the patch, but I need some help getting it committed into mainline; I haven't contributed to Hadoop before.
> Cheers!
> Pradeep
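
For illustration, here is a minimal Java sketch of the corrected length computation (hypothetical: the class name and sample hostname are invented, and this is not the actual HADOOP-12345 patch). It applies the standard XDR rule that variable-length opaque/string data is padded to a 4-byte boundary:

    // Hypothetical sketch; field names mirror CredentialsSys.java as quoted above.
    public class CredentialLengthSketch {
        public static void main(String[] args) {
            String mHostName = "host1";            // 5 bytes: not a multiple of 4
            int nameLen = mHostName.getBytes().length;

            // XDR aligns variable-length data to 4-byte boundaries.
            int pad = (4 - (nameLen % 4)) % 4;     // 5 -> 3 pad bytes, 8 -> 0

            // 20 = mStamp(4) + hostname length field(4) + mUID(4) + mGID(4)
            //    + aux-GID count(4); each aux GID would add 4 more bytes.
            int mCredentialsLength = 20 + nameLen + pad;

            System.out.println("credential length = " + mCredentialsLength); // 28
        }
    }

With the unpadded computation (20 + nameLen), "host1" would yield 25, so the server would look for mUID three bytes too early, which matches the GARBAGE_ARGS failure described above.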



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-issues-help@hadoop.apache.org

