hadoop-common-dev mailing list archives

From "Hairong Kuang (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-1232) Datanode did not get removed from blockMap when a datanode was down
Date Tue, 10 Apr 2007 22:26:32 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-1232?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12487941
] 

Hairong Kuang commented on HADOOP-1232:
---------------------------------------

The datanode persistently stayed in the blockMap after it was shut down.

> Datanode did not get removed from blockMap when a datanode was down
> -------------------------------------------------------------------
>
>                 Key: HADOOP-1232
>                 URL: https://issues.apache.org/jira/browse/HADOOP-1232
>             Project: Hadoop
>          Issue Type: Bug
>          Components: dfs
>    Affects Versions: 0.12.3
>            Reporter: Hairong Kuang
>             Fix For: 0.13.0
>
>
> After a datanode shut down, the following exception was thrown when a job tried to
> open a file with blocks on that datanode. It appears that the datanode was removed from
> NetworkTopology but not from the blockMap.
> org.apache.hadoop.ipc.RemoteException: java.io.IOException: java.lang.IllegalArgumentException:
> Unexpected non-existing data node: /xxx/yyy:50010
>         at org.apache.hadoop.net.NetworkTopology.checkArgument(NetworkTopology.java:379)
>         at org.apache.hadoop.net.NetworkTopology.getDistance(NetworkTopology.java:396)
>         at org.apache.hadoop.dfs.FSNamesystem$ReplicationTargetChooser$1.compare(FSNamesystem.java:3161)
>         at org.apache.hadoop.dfs.FSNamesystem$ReplicationTargetChooser$1.compare(FSNamesystem.java:3160)
>         at java.util.Arrays.mergeSort(Arrays.java:1270)
>         at java.util.Arrays.sort(Arrays.java:1210)
>         at java.util.Collections.sort(Collections.java:159)
>         at org.apache.hadoop.dfs.FSNamesystem$ReplicationTargetChooser.sortByDistance(FSNamesystem.java:3159)
>         at org.apache.hadoop.dfs.FSNamesystem.open(FSNamesystem.java:549)
>         at org.apache.hadoop.dfs.NameNode.open(NameNode.java:250)
>         at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>         at java.lang.reflect.Method.invoke(Method.java:597)
>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:336)
>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:559)
>  
>         at org.apache.hadoop.ipc.Client.call(Client.java:471)
>         at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:163)
>         at org.apache.hadoop.dfs.$Proxy1.open(Unknown Source)
>         at org.apache.hadoop.dfs.DFSClient$DFSInputStream.openInfo(DFSClient.java:511)
>         at org.apache.hadoop.dfs.DFSClient$DFSInputStream.<init>(DFSClient.java:498)
>         at org.apache.hadoop.dfs.DFSClient.open(DFSClient.java:207)
>         at org.apache.hadoop.dfs.DistributedFileSystem$RawDistributedFileSystem.open(DistributedFileSystem.java:129)
>         at org.apache.hadoop.fs.ChecksumFileSystem$FSInputChecker.<init>(ChecksumFileSystem.java:110)
>         at org.apache.hadoop.fs.ChecksumFileSystem.open(ChecksumFileSystem.java:330)
>         at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:245)
>         at org.apache.hadoop.mapred.TextInputFormat.getRecordReader(TextInputFormat.java:54)
>         at org.apache.hadoop.mapred.MapTask.run(MapTask.java:139)
>         at org.apache.hadoop.mapred.TaskTracker$Child.main(TaskTracker.java:1445)
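
To make the failure mode concrete: the trace above suggests the namenode dropped the dead datanode from NetworkTopology but left it listed as a replica holder in the blockMap, so sorting replicas by distance hits a node the topology no longer knows. The following is a minimal standalone sketch of that inconsistency, not Hadoop code; the class and data-structure names here (`topology`, `blockMap`, `getDistance`) are simplified stand-ins for the real FSNamesystem/NetworkTopology internals.

```java
import java.util.*;

// Sketch of the HADOOP-1232 inconsistency: a datanode must be removed from
// BOTH the topology and the block map when it dies. Removing it from only
// one leaves a dangling replica reference that breaks distance sorting.
public class BlockMapSketch {
    // Stand-in for NetworkTopology: the set of known datanodes.
    static Set<String> topology = new HashSet<>();
    // Stand-in for the blockMap: block id -> datanodes holding a replica.
    static Map<String, List<String>> blockMap = new HashMap<>();

    // Mimics NetworkTopology.getDistance's argument check.
    static int getDistance(String node) {
        if (!topology.contains(node))
            throw new IllegalArgumentException(
                "Unexpected non-existing data node: " + node);
        return 0; // actual distance computation elided
    }

    public static void main(String[] args) {
        String dn = "/rack1/dn1:50010";
        topology.add(dn);
        blockMap.put("blk_1", new ArrayList<>(List.of(dn)));

        // Buggy removal path: the node leaves the topology
        // but its replica entry stays in the block map.
        topology.remove(dn);

        boolean threw = false;
        for (String replica : blockMap.get("blk_1")) {
            try {
                getDistance(replica); // what sortByDistance's comparator does
            } catch (IllegalArgumentException e) {
                threw = true;
            }
        }
        System.out.println(threw
            ? "IllegalArgumentException reproduced"
            : "no exception");
    }
}
```

The fix direction implied by the report is simply that the removal paths for the two structures must be kept in lockstep (remove the replica entries from the block map in the same place the node is removed from the topology).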

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

