hadoop-hdfs-issues mailing list archives

From "Christian Kunz (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HDFS-732) HDFS files are ending up truncated
Date Wed, 28 Oct 2009 04:58:59 GMT

    [ https://issues.apache.org/jira/browse/HDFS-732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12770785#action_12770785 ]

Christian Kunz commented on HDFS-732:
-------------------------------------

I am still not convinced that everything is okay. Even when the close call fails, the DFS client
does not go away by itself and has to continue to provide consistent results.

Our client application called hdfsCloseFile of libhdfs twice; the second attempt was successful,
as mentioned at the end of the first comment.
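The retry pattern described above can be sketched as follows. This is a hypothetical Java analogue of calling hdfsCloseFile a second time after a first failure; `retryClose` and `MAX_ATTEMPTS` are illustrative names, not part of any HDFS or libhdfs API.

```java
import java.io.Closeable;
import java.io.IOException;

// Hypothetical sketch: retry close() once after a failure, analogous to
// calling hdfsCloseFile a second time as described in the comment.
public class RetryClose {
    static final int MAX_ATTEMPTS = 2;   // illustrative constant

    static boolean retryClose(Closeable out) {
        for (int attempt = 1; attempt <= MAX_ATTEMPTS; attempt++) {
            try {
                out.close();
                return true;             // close reported success
            } catch (IOException e) {
                System.err.println("close attempt " + attempt + " failed: " + e);
            }
        }
        return false;                    // all attempts failed
    }

    public static void main(String[] args) {
        // A stand-in stream that fails once, then "succeeds" -- mirroring the
        // second-close-succeeds behavior the comment describes for 0.20.1.
        Closeable flaky = new Closeable() {
            private int calls = 0;
            public void close() throws IOException {
                if (calls++ == 0) throw new IOException("first close failed");
            }
        };
        System.out.println(retryClose(flaky)); // prints true
    }
}
```

Whether a successful second close should be trusted is exactly the question raised below: the retry only reports what close() chooses to report.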

Looking at the source code of hadoop-0.18.3, hadoop-0.20.1, and trunk, I see different behavior
of the close function in DFSOutputStream:

hadoop-0.18.3:
close() calls closeInternal(), which throws an exception when the stream was aborted previously.

hadoop-0.20.1:
if (closed) return;
close() always returns okay once the stream is closed, even when it was aborted previously.

trunk:
if (closed) { IOException e = lastException; if (e == null) return; else throw e; }

hadoop-0.18.3 and trunk are acceptable, but in hadoop-0.20.1, when a client tries to close
a file twice, the second attempt will always succeed, even when the stream was aborted previously.
This is inconsistent.
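The inconsistency can be illustrated with a minimal sketch. This is not actual HDFS code; `SketchStream`, its `abort` method, and the `trunkBehavior` flag are invented here to contrast the 0.20.1 and trunk close() paths quoted above, with an abort modeled as recording a lastException.

```java
import java.io.IOException;

// Minimal sketch (not HDFS source) contrasting the two close() behaviors.
class SketchStream {
    private boolean closed = false;
    private IOException lastException = null;
    private final boolean trunkBehavior; // true: trunk semantics, false: 0.20.1

    SketchStream(boolean trunkBehavior) { this.trunkBehavior = trunkBehavior; }

    void abort(IOException cause) {      // model an aborted stream
        lastException = cause;
        closed = true;
    }

    void close() throws IOException {
        if (closed) {
            if (trunkBehavior && lastException != null) {
                throw lastException;     // trunk: re-throw the abort cause
            }
            return;                      // 0.20.1: silently report success
        }
        closed = true;                   // normal close path (no-op here)
    }
}

public class CloseSemantics {
    public static void main(String[] args) throws Exception {
        SketchStream v20 = new SketchStream(false);
        v20.abort(new IOException("pipeline failure"));
        v20.close(); // does not throw: the abort is hidden from the caller
        System.out.println("0.20.1: second close succeeds despite abort");

        SketchStream trunk = new SketchStream(true);
        trunk.abort(new IOException("pipeline failure"));
        try {
            trunk.close();
        } catch (IOException e) {
            System.out.println("trunk: close re-throws: " + e.getMessage());
        }
    }
}
```

Under this sketch, a caller retrying close() against the 0.20.1 behavior gets success for a file that was never properly closed, which is the scenario the comment objects to.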

Besides that, what is the purpose of recovering a file that was aborted during close? What is
a use case for that?



> HDFS files are ending up truncated
> ----------------------------------
>
>                 Key: HDFS-732
>                 URL: https://issues.apache.org/jira/browse/HDFS-732
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: name-node
>    Affects Versions: 0.20.1
>            Reporter: Christian Kunz
>
> We recently started to use hadoop-0.20.1 in our production environment (less than 2 weeks
ago) and already had 3 instances of truncated files, more than we had for months using hadoop-0.18.3.
> Writing is done using libhdfs, although it rather seems to be a problem on the server
side.
> I will post some relevant logs (they are too large to be put into the description)
>  

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

