hadoop-hdfs-issues mailing list archives

From "Tsz Wo (Nicholas), SZE (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-4762) Provide HDFS based NFSv3 and Mountd implementation
Date Tue, 02 Jul 2013 03:33:24 GMT

    [ https://issues.apache.org/jira/browse/HDFS-4762?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13697469#comment-13697469 ]

Tsz Wo (Nicholas), SZE commented on HDFS-4762:
----------------------------------------------

- OffsetRange.compareTo(..) only compares min.  It considers two ranges equal even if they
have the same min but different max values.  However, OpenFileCtx.pendingWrites uses OffsetRange
as a key type.  Is it possible to have two writes with the same min?  In that case, the writes
would be considered equal and checkRepeatedWriteRequest(..) may be incorrect.  If two writes
with the same min are possible, I suggest also comparing the max, i.e.
{code}
  private static int compareTo(long left, long right) {
    if (left < right) {
      return -1;
    } else if (left > right) {
      return 1;
    } else {
      return 0;
    }
  }
  
  @Override
  public int compareTo(OffsetRange other) {
    final int d = compareTo(min, other.getMin());
    return d != 0? d: compareTo(max, other.getMax());
  }
{code}
BTW, the comment above OffsetRange.compareTo(..) is incorrect.
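To illustrate the point, here is a minimal, hypothetical stand-in for OffsetRange (the real class is in the patch; Range and distinctKeys are illustration-only names) showing how a min-only compareTo makes a TreeMap-backed pendingWrites silently replace one pending write with another:
{code}
import java.util.TreeMap;

// Hypothetical stand-in for OffsetRange: compareTo(..) looks at min alone,
// as in the patch under review.
class Range implements Comparable<Range> {
    final long min;
    final long max;

    Range(long min, long max) {
        this.min = min;
        this.max = max;
    }

    @Override
    public int compareTo(Range other) {
        return Long.compare(min, other.min);
    }
}

public class RangeKeyDemo {
    // Two writes with the same min but different max values collapse into a
    // single key, so the map ends up with one entry instead of two.
    static int distinctKeys() {
        TreeMap<Range, String> pendingWrites = new TreeMap<Range, String>();
        pendingWrites.put(new Range(0, 100), "first write");
        pendingWrites.put(new Range(0, 200), "second write");
        return pendingWrites.size();
    }

    public static void main(String[] args) {
        System.out.println(distinctKeys()); // prints 1: the second put replaced the first
    }
}
{code}
Comparing max as well, as suggested above, keeps the two writes distinct in the map.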

- In OpenFileCtx.checkDump(..), 
{code}
      try {
        if (dumpFile.exists()) {
          throw new RuntimeException("The dump file should not exist:"
              + dumpFilePath);
        }
        dumpOut = new FileOutputStream(dumpFile);
        if (dumpFile.createNewFile()) {
          LOG.error("Can't create dump file:" + dumpFilePath);
        }
      } catch (IOException e) {
        LOG.error("Got failure when creating dump stream " + dumpFilePath
            + " with error:" + e);
        enabledDump = false;
        if (dumpOut != null) {
          try {
            dumpOut.close();
          } catch (IOException e1) {
            LOG.error("Can't close dump stream " + dumpFilePath
                + " with error:" + e);
          }
        }
        return;
      }
{code}
-* The second if-statement should be "if (!dumpFile.createNewFile())".  Also, createNewFile()
only creates the file when it does not already exist, so the first if-statement may not be needed.
-* Use "IOUtils.cleanup(LOG, dumpOut);" to close dumpOut.
-* Is it okay to return when there is an exception?  Should it re-throw the exception? 
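One possible shape for the corrected creation logic is sketched below (an illustration only, using plain java.io in place of LOG and IOUtils; DumpFileSketch and openDumpStream are hypothetical names).  It also orders the calls so that createNewFile() runs before the FileOutputStream is opened, since the stream constructor would otherwise create the file first:
{code}
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStream;

public class DumpFileSketch {
    // Hypothetical helper: create the dump file and open a stream to it.
    // createNewFile() atomically creates the file and returns false when it
    // already exists, so a separate exists() check is unnecessary, and the
    // inverted condition from the patch is fixed.
    static OutputStream openDumpStream(File dumpFile) throws IOException {
        if (!dumpFile.createNewFile()) {
            throw new IOException("The dump file should not exist: " + dumpFile);
        }
        return new FileOutputStream(dumpFile);
    }
}
{code}
Throwing here leaves the caller to decide whether to propagate the IOException or to disable dumping, which is the question raised in the last bullet.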

- WriteManager.shutdownAsyncDataService() is not used.

                
> Provide HDFS based NFSv3 and Mountd implementation
> --------------------------------------------------
>
>                 Key: HDFS-4762
>                 URL: https://issues.apache.org/jira/browse/HDFS-4762
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>    Affects Versions: 3.0.0
>            Reporter: Brandon Li
>            Assignee: Brandon Li
>         Attachments: HDFS-4762.patch, HDFS-4762.patch.2, HDFS-4762.patch.3, HDFS-4762.patch.3,
HDFS-4762.patch.4, HDFS-4762.patch.5
>
>
> This is to track the implementation of NFSv3 to HDFS.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
