hadoop-hdfs-issues mailing list archives

From "Yongjun Zhang (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-6361) TestIdUserGroup.testUserUpdateSetting failed due to out of range nfsnobody Id
Date Mon, 12 May 2014 19:13:16 GMT

    [ https://issues.apache.org/jira/browse/HDFS-6361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13995489#comment-13995489 ]

Yongjun Zhang commented on HDFS-6361:

Hi Colin,

Thanks a lot for reviewing! Please see my explanations below:
What is it that we have yet to understand? We better figure it out before we commit this code.
Yes, I put that comment there on purpose, to prompt a review discussion. What I meant was: Nfs3Util.getFileAttr(), which is called from many places, will get "-2" for "4294967294". What has yet to be understood is whether all consumers of Nfs3Util.getFileAttr() will be happy when "-2" is returned instead of 4294967294, given the difference between int and unsigned int. That said, I did run all the regression tests before posting the patch, and the results look fine. I hope the change is indeed fine and the concern can be removed.
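To illustrate the int/unsigned-int difference being discussed: a minimal, purely illustrative sketch (the class and method names here are hypothetical, not from the patch) showing why Integer.parseInt fails on "4294967294" and how narrowing the value to a signed int yields -2:

```java
// Illustrative only: shows how an unsigned 32-bit id such as
// nfsnobody's 4294967294 wraps to -2 when narrowed to a signed int.
public class UidNarrowing {
    public static int toSignedId(String idStr) {
        // Integer.parseInt("4294967294") throws NumberFormatException
        // because the value exceeds Integer.MAX_VALUE (2147483647),
        // so parse as long first, then narrow to int.
        long id = Long.parseLong(idStr);
        return (int) id; // 4294967294L narrows to -2
    }

    public static void main(String[] args) {
        System.out.println(toSignedId("4294967294")); // prints -2
        // The original unsigned value can be recovered with a mask:
        System.out.println(
            (toSignedId("4294967294") & 0xFFFFFFFFL) == 4294967294L); // prints true
    }
}
```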
I don't see a reason to special-case -2
So far we have only seen this one special number, and I'm not aware of any others that need special treatment, because IDs are supposed to be positive. If we generalize the handling here, we might not catch it right away when someone creates a new "out-of-range" number. What do you think?
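The trade-off being argued for above, special-casing the one known value so that any new out-of-range id fails fast instead of being silently truncated, could look roughly like this. This is a hypothetical sketch of the approach, not the actual patch code; the names IdRangeCheck and parseId are made up for illustration:

```java
// Hypothetical sketch: only the known nfsnobody id 4294967294 is
// remapped to -2; any other id outside the signed-int range throws,
// so a newly introduced out-of-range id is caught immediately rather
// than silently truncated by int narrowing.
public class IdRangeCheck {
    private static final long NFSNOBODY_ID = 4294967294L; // maps to -2

    public static int parseId(String idStr) {
        long id = Long.parseLong(idStr);
        if (id == NFSNOBODY_ID) {
            return -2; // the one known special case
        }
        if (id < 0 || id > Integer.MAX_VALUE) {
            throw new IllegalArgumentException(
                "Unexpected out-of-range id: " + id);
        }
        return (int) id;
    }
}
```

A generalized alternative would mask every id with 0xFFFFFFFFL, but as noted above that would hide future surprises instead of surfacing them.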
The Linux kernel itself doesn't use 64-bit numbers for UIDs. I'm not aware of any kernels
which do. So I don't see a benefit to this.
I agree.

Thanks again!


> TestIdUserGroup.testUserUpdateSetting failed due to out of range nfsnobody Id
> -----------------------------------------------------------------------------
>                 Key: HDFS-6361
>                 URL: https://issues.apache.org/jira/browse/HDFS-6361
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: nfs
>    Affects Versions: 2.4.0
>            Reporter: Yongjun Zhang
>            Assignee: Yongjun Zhang
>         Attachments: HDFS-6361.001.patch
> The following error happens pretty often:
> org.apache.hadoop.nfs.nfs3.TestIdUserGroup.testUserUpdateSetting
> Failing for the past 1 build (Since Unstable#61 )
> Took 0.1 sec.
> add description
> Error Message
> For input string: "4294967294"
> Stacktrace
> java.lang.NumberFormatException: For input string: "4294967294"
> 	at java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
> 	at java.lang.Integer.parseInt(Integer.java:495)
> 	at java.lang.Integer.valueOf(Integer.java:582)
> 	at org.apache.hadoop.nfs.nfs3.IdUserGroup.updateMapInternal(IdUserGroup.java:137)
> 	at org.apache.hadoop.nfs.nfs3.IdUserGroup.updateMaps(IdUserGroup.java:188)
> 	at org.apache.hadoop.nfs.nfs3.IdUserGroup.<init>(IdUserGroup.java:60)
> 	at org.apache.hadoop.nfs.nfs3.TestIdUserGroup.testUserUpdateSetting(TestIdUserGroup.java:71)
> Standard Output
> log4j:WARN No appenders could be found for logger (org.apache.hadoop.nfs.nfs3.IdUserGroup).
> log4j:WARN Please initialize the log4j system properly.
> log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
