hadoop-common-dev mailing list archives

From "Konstantin Shvachko (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-2078) Name-node should be able to close empty files.
Date Fri, 19 Oct 2007 20:52:51 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-2078?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12536341 ]

Konstantin Shvachko commented on HADOOP-2078:

testZeroSizeFile() does not fail because DFSClient writes 0 bytes into the file before closing.
When the output stream buffer is empty, DFSClient.endBlock() allocates one block and sends
0 bytes to 2 data-nodes.
Each data-node creates one empty data file and one non-empty meta file, and then reports
to the name-node that the block has been received.
So an empty file is represented by one block of size 0.
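To illustrate, here is a very rough sketch of that flow. All names here (SketchClient, blockSizes, the buffer handling) are hypothetical simplifications for illustration, not Hadoop's actual classes or API: closing a stream whose buffer is empty still allocates one block, so a zero-length file ends up reported as one block of size 0.

```java
import java.util.ArrayList;
import java.util.List;

class SketchClient {
    // sizes of the blocks "reported to the name-node" (illustrative only)
    final List<Long> blockSizes = new ArrayList<>();
    private int buffered = 0; // bytes waiting in the output buffer

    void write(byte[] data) {
        buffered += data.length;
    }

    // Analogue of the endBlock() behavior described above: it runs even
    // when the buffer is empty, so an empty file still gets one block
    // whose reported size is 0.
    void close() {
        blockSizes.add((long) buffered);
        buffered = 0;
    }
}
```

Closing a SketchClient without writing anything leaves blockSizes == [0], mirroring the one-empty-block representation described above.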

In the case you describe, fileBlocks == null and pendingFile != null. So the condition
fileBlocks == null || pendingFile == null
is equivalent to
fileBlocks == null
The two differ only when fileBlocks != null and pendingFile == null, which never happens,
because no file means no blocks.
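An exhaustive check over the four nullness combinations confirms this. The variable names mirror the comment; the non-null objects are just placeholders:

```java
public class ConditionCheck {
    public static void main(String[] args) {
        Object[] values = { null, new Object() };
        for (Object fileBlocks : values) {
            for (Object pendingFile : values) {
                boolean full = (fileBlocks == null || pendingFile == null);
                boolean simple = (fileBlocks == null);
                if (full != simple) {
                    // The only disagreement: fileBlocks != null, pendingFile == null
                    System.out.println("differs: fileBlocks="
                        + (fileBlocks == null ? "null" : "non-null")
                        + ", pendingFile="
                        + (pendingFile == null ? "null" : "non-null"));
                }
            }
        }
    }
}
```

Running this prints exactly one line, for the fileBlocks != null / pendingFile == null case, which is precisely the combination the invariant "no file means no blocks" rules out.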

> Name-node should be able to close empty files.
> ----------------------------------------------
>                 Key: HADOOP-2078
>                 URL: https://issues.apache.org/jira/browse/HADOOP-2078
>             Project: Hadoop
>          Issue Type: Bug
>    Affects Versions: 0.15.0
>            Reporter: Konstantin Shvachko
>            Assignee: Konstantin Shvachko
>             Fix For: 0.16.0
>         Attachments: emptyClose.patch
> When I try to close an empty file, the name-node throws an exception "Could not complete write to file"
> and issues a warning "NameSystem.completeFile: failed to complete".
> I don't see any reason why empty files should not be allowed.

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
