hadoop-common-dev mailing list archives

From "Raghu Angadi (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-3033) Datanode fails write to DFS file with exception message "Trying to change block file offset"
Date Tue, 18 Mar 2008 01:04:24 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-3033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12579693#action_12579693 ]

Raghu Angadi commented on HADOOP-3033:
--------------------------------------

I think we should still explain how that can lead to what we saw:

One particular case I looked at shows that one datanode did not write 64k of data (or overwrote the last 64k):

The last (third) datanode in the pipeline failed with:
{noformat}
2008-03-17 20:38:01,928 INFO org.apache.hadoop.dfs.DataNode: Changing block file offset of block blk_7114623733442731588 from 85983232 to 86048768 meta file offset to 672263
2008-03-17 20:38:01,928 INFO org.apache.hadoop.dfs.DataNode: Exception in receiveBlock for block blk_7114623733442731588 java.io.IOException: Trying to change block file offset of block blk_7114623733442731588 to 86048768 but actual size of file is 85983232
{noformat}
The client retried with the remaining datanodes and succeeded.

Say 'x' == 85983232.

The block file in the tmp directory on the bad datanode is x bytes long, and the meta data file is 672263 bytes long.
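
As a sanity check on these numbers (an illustrative calculation, not Hadoop API code; it assumes the default io.bytes.per.checksum of 512, 4-byte CRC32 checksums per chunk, and the 7-byte meta file header):
{noformat}
// Expected meta file offset for a given block file offset, assuming
// 512-byte checksum chunks, 4-byte CRC32 per chunk, 7-byte meta header.
public class MetaOffsetCheck {
    static final int BYTES_PER_CHECKSUM = 512; // default io.bytes.per.checksum
    static final int CHECKSUM_SIZE = 4;        // CRC32
    static final int META_HEADER = 7;          // 2-byte version + 5-byte checksum header

    static long metaOffset(long blockOffset) {
        return META_HEADER + (blockOffset / BYTES_PER_CHECKSUM) * CHECKSUM_SIZE;
    }

    public static void main(String[] args) {
        System.out.println(metaOffset(86048768L)); // 672263, as in the log above
        System.out.println(metaOffset(85983232L)); // 671751, i.e. 512 bytes less
    }
}
{noformat}
So the meta file length (672263) corresponds to block offset 86048768, not to the actual block file length x, and the 512-byte difference is exactly the checksum data for one 64k packet (65536 / 512 * 4 = 512 bytes). That fits one missing or overwritten packet.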

Comparing this block's data from the failed datanode and from a good datanode shows that the data up to x-64k matches on both. The 64k at offset x-64k on the bad datanode matches the 64k at offset x on the good datanode. The meta data file contents match on both sides. So the bad datanode either did not write the last packet, or overwrote the last-but-one packet with the last packet. Each packet carries 64k of real data.
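For what it's worth, that comparison can be reproduced with something like the following (a hypothetical standalone snippet, not part of Hadoop; the file paths are placeholders, and the good replica must be at least x+64k long):
{noformat}
import java.io.IOException;
import java.io.RandomAccessFile;
import java.util.Arrays;

public class CompareWindows {
    // Read len bytes starting at offset from the given file.
    static byte[] readAt(String path, long offset, int len) throws IOException {
        byte[] buf = new byte[len];
        try (RandomAccessFile f = new RandomAccessFile(path, "r")) {
            f.seek(offset);
            f.readFully(buf); // throws EOFException if the file is too short
        }
        return buf;
    }

    public static void main(String[] args) throws IOException {
        final long x = 85983232L;     // length of the bad replica's block file
        final int packet = 64 * 1024; // 64k of real data per packet
        // args[0] = bad replica's block file, args[1] = good replica's block file
        byte[] bad  = readAt(args[0], x - packet, packet);
        byte[] good = readAt(args[1], x, packet);
        System.out.println("last 64k of bad == 64k at x of good: "
                + Arrays.equals(bad, good));
    }
}
{noformat}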


> Datanode fails write to DFS file with exception message "Trying to change block file offset"
> --------------------------------------------------------------------------------------------
>
>                 Key: HADOOP-3033
>                 URL: https://issues.apache.org/jira/browse/HADOOP-3033
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: dfs
>    Affects Versions: 0.16.1
>            Reporter: dhruba borthakur
>            Assignee: dhruba borthakur
>         Attachments: badnode.patch
>
>
> A write to a DFS block failed with the last datanode in the pipeline reporting this error:
> Receiving block blk_-7279084187433655573 src: /xx.xx.xx.xx:xx dest: /xx.xx.xx.xx:50010
> Changing block file offset of block blk_-7279084187433655573 from 9043968 to 9043968 meta file offset to 70663
> Changing block file offset of block blk_-7279084187433655573 from 111935488 to 112001024 meta file offset to 875015
> Exception in receiveBlock for block blk_-7279084187433655573 java.io.IOException: Trying to change block file offset of block blk_-7279084187433655573 to 112001024 but actual size of file is 111935488
> PacketResponder 0 for block blk_-7279084187433655573 Interrupted.
> PacketResponder 0 for block blk_-7279084187433655573 terminating
> writeBlock blk_-7279084187433655573 received exception java.io.IOException: Trying to change block file offset of block blk_-7279084187433655573 to 112001024 but actual size of file is 111935488
> DataXceiver: java.io.IOException: Trying to change block file offset of block blk_-7279084187433655573 to 112001024 but actual size of file is 111935488

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

