hadoop-hdfs-issues mailing list archives

From "Raghu Angadi (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HDFS-732) HDFS files are ending up truncated
Date Wed, 28 Oct 2009 17:37:59 GMT

    [ https://issues.apache.org/jira/browse/HDFS-732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12771012#action_12771012 ]

Raghu Angadi commented on HDFS-732:

0.20 seems to set 'closed' to true inside a finally block. It would be better to fix the
behaviour to be equivalent to 0.21.

That said, I think the contract of close() is not the real issue here. Why isn't the error
from the first close() treated as a hard error?
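To make the 0.20 behaviour concrete, here is a minimal hypothetical sketch (not the actual DFSOutputStream code; the class and field names are invented for illustration) of what setting 'closed' inside a finally block does: after a failed first close() the stream is already marked closed, so a second close() silently succeeds and the error is lost.

```java
import java.io.IOException;

public class CloseContractSketch {
    // Hypothetical stand-in for a 0.20-style output stream.
    static class Stream020 {
        private boolean closed = false;
        private final boolean flushFails;

        Stream020(boolean flushFails) { this.flushFails = flushFails; }

        void close() throws IOException {
            if (closed) {
                return; // second close() is a silent no-op
            }
            try {
                if (flushFails) { // stands in for a pipeline/flush failure
                    throw new IOException("error flushing last block");
                }
            } finally {
                closed = true; // marked closed even though close() failed
            }
        }
    }

    public static void main(String[] args) throws Exception {
        Stream020 s = new Stream020(true);
        boolean firstThrew = false, secondThrew = false;
        try { s.close(); } catch (IOException e) { firstThrew = true; }
        try { s.close(); } catch (IOException e) { secondThrew = true; }
        System.out.println("first close threw:  " + firstThrew);  // true
        System.out.println("second close threw: " + secondThrew); // false
    }
}
```

An application that retries close() after the first failure therefore sees a clean second close() and may wrongly conclude the file was written in full.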

bq. Even when the close call fails, DFS client does not go by itself and has to continue to
provide consistent results. 
Do you mean the DFS client does not serve other streams properly after this error?

bq. Besides that, what is the purpose of recovering a file aborted during close? What is a
use case for that?
This changed quite some time back. This is the normal expected behaviour of most filesystems:
a user's process or machine might die in the middle of writing, and there is no point in
throwing away the data that has already been written.

Christian, did you see the actual error on the datanodes while writing? I would be concerned
about pipeline error detection whenever I see a failure on all three datanodes; multiple
bugs were fixed in this area. Please include any stacktrace around the messages in the
datanode logs (the third datanode's log would be very useful, but it looks like you were not
able to recover it).
Partial data being recovered after such a failure is expected. I agree it would be better to
make a second invocation of close() return an error as well, and it would be good practice
for the app not to ignore the error from the first close().
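On the application side, the practice being recommended can be sketched as follows. This is an illustrative pattern, not code from the issue: the `writeAndClose` helper is invented here, and with libhdfs (which the reporter uses) the analogous check is testing the return value of hdfsCloseFile() rather than catching an exception.

```java
import java.io.Closeable;
import java.io.IOException;

public class CloseCheck {
    // Treat an IOException from the first close() as a hard failure of the
    // whole write, instead of silently ignoring it: the file may be
    // truncated on HDFS even though every write() call succeeded.
    static boolean writeAndClose(Closeable out) {
        try {
            out.close(); // errors here can mean the last block was lost
            return true;
        } catch (IOException e) {
            // hard error: surface it to the caller so it can retry/alert
            return false;
        }
    }

    public static void main(String[] args) {
        Closeable failing = () -> { throw new IOException("pipeline failed"); };
        Closeable ok = () -> { /* close succeeds */ };
        System.out.println(writeAndClose(failing)); // false
        System.out.println(writeAndClose(ok));      // true
    }
}
```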

> HDFS files are ending up truncated
> ----------------------------------
>                 Key: HDFS-732
>                 URL: https://issues.apache.org/jira/browse/HDFS-732
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: name-node
>    Affects Versions: 0.20.1
>            Reporter: Christian Kunz
> We recently started to use hadoop-0.20.1 in our production environment (less than 2 weeks
> ago) and already had 3 instances of truncated files, more than we had for months using hadoop-0.18.3.
> Writing is done using libhdfs, although it rather seems to be a problem on the server
> I will post some relevant logs (they are too large to be put into the description)

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
