hadoop-common-dev mailing list archives

From Stefan Groschupf <...@media-style.com>
Subject Re: "lost" NDFS blocks following network reorg
Date Mon, 27 Mar 2006 00:48:17 GMT
Hi hadoop developers,

I moved this discussion to the hadoop developer list since it is probably more appropriate for this problem than the nutch users mailing list.

I spent some time reading the code and found some interesting things.

The local name of a data node is machineName + ":" + tmpPort, so it can change if the port is blocked or the machine name changes. Maybe we should create the data node name only once and write it to the data folder so it can be read back later on(?)
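
A rough sketch of what I mean (just a sketch, not the real DataNode code; the file name "datanode.name" and the helper method are made up for illustration):

import java.io.*;

public class PersistentNodeName {

  // Return the node name written on first startup, or create and
  // persist it if the data folder does not contain one yet.
  public static String getOrCreateName(File dataDir, String machineName, int port)
      throws IOException {
    File nameFile = new File(dataDir, "datanode.name");   // hypothetical file name
    if (nameFile.exists()) {
      BufferedReader in = new BufferedReader(new FileReader(nameFile));
      try {
        return in.readLine();                             // reuse the stable name
      } finally {
        in.close();
      }
    }
    String name = machineName + ":" + port;               // derived once, then frozen
    Writer out = new FileWriter(nameFile);
    try {
      out.write(name);
    } finally {
      out.close();
    }
    return name;
  }
}

That way a blocked port or a renamed host would not change the name the name node knows the data node by.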

This local name is used to send block reports to the name node. FSNamesystem#processReport(Block newReport[], UTF8 dataNodeLocalName) processes this report.
In the first line of this method the DatanodeInfo is looked up by the data node's local name. The data node is already in this map since a heartbeat is sent before a block report.
So:
   DatanodeInfo node = (DatanodeInfo) datanodeMap.get(name); // no problem, but just an 'empty' container
   ...
   Block oldReport[] = node.getBlocks(); // will return null since no blocks are yet associated with this node

Since oldReport is null, all the code is skipped until line 901, and that only adds the blocks to the node container.
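
To make that concrete, here is a self-contained toy (plain java.util with stand-in classes, not the real NDFS types) showing why the lookup under a changed name yields an empty container and a null oldReport:

import java.util.*;

public class NameLookupDemo {

  static class NodeInfo {                    // stand-in for DatanodeInfo
    List blocks = new ArrayList();
    Object[] getBlocks() { return blocks.isEmpty() ? null : blocks.toArray(); }
  }

  public static void main(String[] args) {
    Map datanodeMap = new TreeMap();

    // First life of the node, registered as host-a:50010 with two blocks.
    NodeInfo old = new NodeInfo();
    old.blocks.add(new Long(1));
    old.blocks.add(new Long(2));
    datanodeMap.put("host-a:50010", old);

    // After the reorg the same machine heartbeats as host-b:50010,
    // which creates a fresh, empty entry under the new name.
    datanodeMap.put("host-b:50010", new NodeInfo());

    NodeInfo node = (NodeInfo) datanodeMap.get("host-b:50010");
    System.out.println("oldReport == null? " + (node.getBlocks() == null));  // true
  }
}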

At line 924 a section of code begins that collects all obsolete blocks. First of all I am wondering why we iterate through all blocks here; this could be expensive, and it would be enough to iterate over only the blocks reported by this data node, wouldn't it?
Whether a block is still valid is tested by FSDirectory#isValidBlock, which checks if the block is in activeBlocks.
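
A toy version of that sweep (plain Longs as block ids, a Set for activeBlocks; the stand-in isValidBlock is just a membership test, as described above), restricted to the blocks of the reporting node instead of all known blocks:

import java.util.*;

public class ObsoleteBlockSweep {

  // stand-in for FSDirectory#isValidBlock
  static boolean isValidBlock(Set activeBlocks, Long blockId) {
    return activeBlocks.contains(blockId);
  }

  // Collect the reporting node's blocks that are no longer active,
  // iterating only over that node's blocks.
  static List collectObsolete(Set activeBlocks, List nodeBlocks) {
    List obsolete = new ArrayList();
    for (Iterator it = nodeBlocks.iterator(); it.hasNext();) {
      Long id = (Long) it.next();
      if (!isValidBlock(activeBlocks, id)) {
        obsolete.add(id);
      }
    }
    return obsolete;
  }

  public static void main(String[] args) {
    Set active = new HashSet(Arrays.asList(new Long[] { new Long(1), new Long(2) }));
    List reported = Arrays.asList(new Long[] { new Long(1), new Long(2), new Long(3) });
    System.out.println(collectObsolete(active, reported));   // [3]
  }
}
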
The problem I see now is that the only method that adds blocks to activeBlocks is unprotectedAddFile(UTF8 name, Block blocks[]). But here, too, the local name that may have changed is involved. This method is also used to load the state of a stopped or crashed name node.
So in case you stop the dfs and change the host names, a set of blocks will be marked as obsolete and deleted.
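
The activeBlocks part again as a toy (unprotectedAddFile here is only a stand-in following the signature quoted above, not the real FSDirectory code): the set is filled while the saved state is (re)loaded, and anything that is not re-added this way fails isValidBlock and gets collected by the sweep above.

import java.util.*;

public class ActiveBlocksToy {

  Set activeBlocks = new HashSet();         // block ids known to belong to some file

  // stand-in for FSDirectory#unprotectedAddFile(UTF8 name, Block blocks[])
  void unprotectedAddFile(String name, Long blocks[]) {
    for (int i = 0; i < blocks.length; i++) {
      activeBlocks.add(blocks[i]);
    }
  }

  boolean isValidBlock(Long blockId) {
    return activeBlocks.contains(blockId);
  }

  public static void main(String[] args) {
    ActiveBlocksToy dir = new ActiveBlocksToy();
    // "Reloading" the saved state registers the blocks of each file.
    dir.unprotectedAddFile("/user/foo/part-00000", new Long[] { new Long(1), new Long(2) });
    // A block that never gets re-added is treated as invalid ...
    System.out.println(dir.isValidBlock(new Long(3)));  // false -> marked obsolete and deleted
  }
}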

Writing a test case for this behavior is very difficult since it involves a change of the machine name.

Does my observation make sense, or have I overlooked a detail and the problem Ken describes is caused by something else?
In any case I suggest making the data node name persistent, so that in case the port or host name changes the name node will not treat the same data node as a new one.

Stefan





On 26.03.2006, at 23:11, Doug Cutting wrote:

> Ken Krugler wrote:
>> Anyway, curious if anybody has insights here. We've done a fair  
>> amount of poking around, to no avail. I don't think there's any  
>> way to get the blocks back, as they definitely seem to be gone,  
>> and file recovery on Linux seems pretty iffy. I'm mostly  
>> interested in figuring out if this is a known issue ("Of course  
>> you can't change the server names and expect it to work"), or  
>> whether it's a symptom of lurking NDFS bugs.
>
> It's hard to tell, after the fact, whether stuff like this is pilot  
> error or a bug.  Others have reported similar things, so it's  
> either a bug or it's too easy to make pilot errors.  So something  
> needs to change.  But what?
>
> We need to start testing stuff like this systematically.  A  
> reproducible test case would make this much easier to diagnose.
>
> I'm sorry I can't be more helpful.  I'm sorry you lost data.
>
> Doug
>

---------------------------------------------
blog: http://www.find23.org
company: http://www.media-style.com


