hadoop-hdfs-issues mailing list archives

From "Hairong Kuang (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HDFS-732) HDFS files are ending up truncated
Date Wed, 28 Oct 2009 20:16:59 GMT

    [ https://issues.apache.org/jira/browse/HDFS-732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12771092#action_12771092 ]

Hairong Kuang commented on HDFS-732:
------------------------------------

> First attempt to close the file was unsuccessful, but second attempt was successful (but with truncated size).
I do not think this is true. In most cases, a dfs client is not able to close a file if it fails to push data to datanodes, because all replicas are left in the under-construction state.
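
Here is a minimal sketch of the client-side behavior I am describing, using the standard FileSystem API (the path and payload below are hypothetical, for illustration only):

{code:java}
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class CloseUnderConstruction {
  public static void main(String[] args) throws IOException {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    // hypothetical path, for illustration only
    FSDataOutputStream out = fs.create(new Path("/tmp/hdfs-732-example"));
    out.write(new byte[17 * 1024 * 1024]);
    try {
      // close() flushes the last packet and asks the NameNode to complete
      // the file. If the pipeline to the datanodes failed and the last
      // block's replicas are still under construction, the NameNode does
      // not complete the file and close() eventually throws.
      out.close();
    } catch (IOException e) {
      // The file stays open under the client's lease; the client does not
      // get a "successful" close with a truncated length.
      System.err.println("close failed; file left under construction: " + e);
    }
  }
}
{code}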

As I said in my comment yesterday, it is the NameNode that closed the file. The following NameNode log
{noformat}
2009-10-23 21:16:00,397 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: commitBlockSynchronization(blk_6703874482275767879_76840999) successful
2009-10-23 22:16:02,159 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* blk_6703874482275767879_76840999 recovery started, primary=xxx.yyy.zzz.44:uuu10
2009-10-23 22:16:02,925 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: commitBlockSynchronization(lastblock=blk_6703874482275767879_76840999, newgenerationstamp=76888761, newlength=17825792, newtargets=[xxx.yyy.zzz.44:uuu10], closeFile=true, deleteBlock=false)
{noformat}
shows that after the dfs client died (around 21:16), its lease expired one hour later (around 22:16). So the NameNode initiated block recovery and then closed the file.
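
For what it's worth, the one-hour gap matches the lease hard limit (LEASE_HARDLIMIT_PERIOD in FSConstants, which is one hour in 0.20). A standalone arithmetic check of the timestamps above:

{code:java}
public class LeaseTimelineCheck {
  // One hour, matching FSConstants.LEASE_HARDLIMIT_PERIOD in 0.20.
  static final long HARD_LIMIT_MS = 60L * 60 * 1000;

  public static void main(String[] args) {
    long clientDied    = millis("21:16:00"); // around when the dfs client died (from the log above)
    long recoveryStart = millis("22:16:02"); // NameNode-initiated block recovery
    long gap = recoveryStart - clientDied;
    System.out.println("gap = " + gap + " ms, hard limit = " + HARD_LIMIT_MS + " ms");
    // prints true: the lease had just expired when recovery started
    System.out.println("lease expired before recovery? " + (gap >= HARD_LIMIT_MS));
  }

  // convert an hh:mm:ss timestamp to milliseconds since midnight
  static long millis(String hhmmss) {
    String[] p = hhmmss.split(":");
    return ((Long.parseLong(p[0]) * 60 + Long.parseLong(p[1])) * 60
            + Long.parseLong(p[2])) * 1000;
  }
}
{code}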


> HDFS files are ending up truncated
> ----------------------------------
>
>                 Key: HDFS-732
>                 URL: https://issues.apache.org/jira/browse/HDFS-732
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: name-node
>    Affects Versions: 0.20.1
>            Reporter: Christian Kunz
>
> We recently started to use hadoop-0.20.1 in our production environment (less than 2 weeks ago) and have already had 3 instances of truncated files, more than we had in months of using hadoop-0.18.3.
> Writing is done using libhdfs, although the problem seems to be on the server side.
> I will post some relevant logs (they are too large to be put into the description).
>  

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

