hadoop-common-dev mailing list archives

From "Konstantin Shvachko (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-2655) Copy on write for data and metadata files in the presence of snapshots
Date Wed, 20 Feb 2008 22:25:43 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-2655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12570856#action_12570856 ]

Konstantin Shvachko commented on HADOOP-2655:

# Rename DatanodeBlockInfo.getVol() to getVolume().
# org.apache.hadoop.fs.*  is imported twice.
# FSDataset.detachFile(v, f, b)
#- It is better to use /** JavaDoc comments */
#- I would place this method into DatanodeBlockInfo.detachFile(b)
# FSDataset.detachBlock().
#- Should @inheritDoc from FSDatasetInterface.
#- When do we need numLinks parameter?
#- The whole part below the comment starting with
// If this block does not have a linked snapshot,
should be moved into a method DatanodeBlockInfo.detachBlock().
# In general I'd reorganize the methods and place most of the copy-on-write functionality
into DatanodeBlockInfo rather than exposing it in a high-level class like FSDataset.
# I doubt that detachBlock() should be a public interface of FSDatasetInterface.
My understanding is that detachBlock() is intended to be used in append(), and it is going
to be specific to real data-nodes, because simulated data-nodes do not have any data that
needs to be copied on write.
I think it should be just a method of FSDataset, and TestAppend should call it directly,
because it is testing the real data-nodes rather than the simulated ones.
# FSVolume.createTmpFile()
#- should probably be a static method.
#- There is an old System.out in it. Could you please remove it?
#- The whole try/catch statement seems to be redundant here.
# FSDataset.replaceFile().
#- We use the same rename code in several places because of the Windows semantics,
and this is yet another variant of that code. I think we should introduce
a FSUtil.replaceFile() method and call it where appropriate.
#- Are you sure we should wait 5 seconds before failing to replace?
Do we expect something to change during that period?
# Rename HardLink.getLinkCount to getLinkCountCommand;
you can set it once, outside the switch, because it is the same for both OSs.
# TestFileAppend has 3 warnings:
#- import org.apache.hadoop.dfs.FSConstants.DatanodeReportType; is unused.
#- private void checkFile() is never used.
#- long len = fs.getFileStatus(file1).getLen(); is never used.
#- We should systematically use /** create file ... */ instead of // create file ...
for method title comments.
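To make the detach logic concrete, here is a minimal, self-contained Java sketch of the copy-on-write step the review is discussing. The names (detach(), the use of java.nio.file, the unix:nlink attribute) are assumptions for illustration, not the patch's actual code: if the block file is hard-linked into a snapshot, make a private copy and swap it in, so later writes cannot touch the snapshot's data.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class DetachSketch {
    // Hypothetical sketch: if the block file shares its inode with a
    // snapshot (link count > 1), copy it and replace the original so
    // subsequent writes do not modify the snapshot's copy.
    static void detach(Path blockFile, Path tmpDir) throws IOException {
        int links = (int) Files.getAttribute(blockFile, "unix:nlink");
        if (links <= 1) {
            return; // not shared with a snapshot; nothing to do
        }
        Path tmp = tmpDir.resolve(blockFile.getFileName() + ".tmp");
        Files.copy(blockFile, tmp, StandardCopyOption.REPLACE_EXISTING);
        // replace the linked file with the private copy
        Files.move(tmp, blockFile, StandardCopyOption.REPLACE_EXISTING);
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("detach");
        Path block = dir.resolve("blk_1");
        Files.write(block, "data".getBytes());
        Path snapshot = dir.resolve("snap_blk_1");
        Files.createLink(snapshot, block);   // snapshot shares the inode
        detach(block, dir);                  // block now has its own copy
        Files.write(block, "new data".getBytes());
        // the snapshot's copy is unchanged
        System.out.println(new String(Files.readAllBytes(snapshot)));
    }
}
```

Note this sketch is Unix-only (the unix:nlink attribute view); it is meant to show why the detach step naturally belongs with the per-block state in DatanodeBlockInfo.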
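The suggested FSUtil.replaceFile() does not exist yet; a hedged sketch of what such a shared utility might look like is below. On Windows, File.renameTo() fails when the destination exists, hence the delete-and-retry loop; the 5 x 1 s wait mirrors the patch under review and is exactly the point being questioned above.

```java
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;

public class ReplaceFileSketch {
    // Hypothetical FSUtil.replaceFile() sketch: try a plain rename first;
    // if it fails (Windows semantics: destination exists), delete the
    // destination and retry briefly in case another process still holds it.
    static void replaceFile(File src, File dst) throws IOException {
        if (src.renameTo(dst)) {
            return; // POSIX rename replaces an existing destination
        }
        for (int attempt = 0; attempt < 5; attempt++) {
            if (dst.delete() && src.renameTo(dst)) {
                return;
            }
            try {
                Thread.sleep(1000);
            } catch (InterruptedException e) {
                throw new IOException("interrupted while replacing " + dst);
            }
        }
        throw new IOException("Cannot replace " + dst + " with " + src);
    }

    public static void main(String[] args) throws IOException {
        File dir = Files.createTempDirectory("replace").toFile();
        File src = new File(dir, "src");
        File dst = new File(dir, "dst");
        Files.write(src.toPath(), "fresh".getBytes());
        Files.write(dst.toPath(), "stale".getBytes());
        replaceFile(src, dst);
        System.out.println(new String(Files.readAllBytes(dst.toPath())));
    }
}
```

Centralizing this in one utility keeps the Windows workaround (and any tuning of the retry policy) in a single place instead of duplicated variants.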
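The getLinkCountCommand point can be illustrated with a small sketch: the command is chosen once, up front, rather than inside a per-OS switch. This is hypothetical code, and it assumes GNU stat (BSD stat uses different flags), so take the specific invocation as an assumption rather than the patch's actual command.

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.nio.file.Files;
import java.nio.file.Path;

public class LinkCountSketch {
    // Hypothetical getLinkCountCommand: set once, outside any per-OS
    // switch, since the same invocation serves the Unix variants handled
    // here. "stat -c %h" prints the hard-link count (GNU stat; assumption).
    static final String[] GET_LINK_COUNT_COMMAND = {"stat", "-c", "%h"};

    static int getLinkCount(Path file) throws IOException {
        String[] cmd = new String[GET_LINK_COUNT_COMMAND.length + 1];
        System.arraycopy(GET_LINK_COUNT_COMMAND, 0, cmd, 0,
                         GET_LINK_COUNT_COMMAND.length);
        cmd[cmd.length - 1] = file.toString();
        Process p = new ProcessBuilder(cmd).start();
        try (BufferedReader r = new BufferedReader(
                new InputStreamReader(p.getInputStream()))) {
            return Integer.parseInt(r.readLine().trim());
        }
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("links");
        Path f = dir.resolve("blk_1");
        Files.write(f, "data".getBytes());
        Files.createLink(dir.resolve("snap_blk_1"), f);
        System.out.println(getLinkCount(f)); // the file plus its snapshot link
    }
}
```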

> Copy on write for data and metadata files in the presence of snapshots
> ----------------------------------------------------------------------
>                 Key: HADOOP-2655
>                 URL: https://issues.apache.org/jira/browse/HADOOP-2655
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: dfs
>            Reporter: dhruba borthakur
>            Assignee: dhruba borthakur
>         Attachments: copyOnWrite.patch, copyOnWrite.patch
> If a DFS Client wants to append data to an existing file (appends, HADOOP-1700) and a
> snapshot is present, the Datanode has to implement some form of copy-on-write for writes
> to data and metadata files.

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
