hadoop-common-dev mailing list archives

From "Raghu Angadi (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-4517) unstable dfs when running jobs on 0.18.1
Date Fri, 24 Oct 2008 19:57:44 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-4517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12642528#action_12642528
] 

Raghu Angadi commented on HADOOP-4517:
--------------------------------------

When was the jstack you attached taken? It looks like the following thread is blocking most of
the other threads:
{noformat}
"IPC Server handler 8 on 50020" daemon prio=10 tid=0x085ca800 nid=0x6a81 in Object.wait()
[0xb0d76000..0xb0d76e20]
   java.lang.Thread.State: WAITING (on object monitor)
        at java.lang.Object.wait(Native Method)
        at java.lang.Thread.join(Thread.java:1143)
        - locked <0xb8be7008> (a org.apache.hadoop.util.Daemon)
        at java.lang.Thread.join(Thread.java:1196)
        at org.apache.hadoop.dfs.FSDataset.interruptOngoingCreates(FSDataset.java:777)
        - locked <0xb64e9c20> (a org.apache.hadoop.dfs.FSDataset)
        at org.apache.hadoop.dfs.FSDataset.updateBlock(FSDataset.java:795)
        - locked <0xb64e9c20> (a org.apache.hadoop.dfs.FSDataset)
        at org.apache.hadoop.dfs.DataNode.updateBlock(DataNode.java:3106)
        at org.apache.hadoop.dfs.LeaseManager.syncBlock(LeaseManager.java:503)
        at org.apache.hadoop.dfs.LeaseManager.recoverBlock(LeaseManager.java:471)
        at org.apache.hadoop.dfs.DataNode.recoverBlock(DataNode.java:3134)
        at sun.reflect.GeneratedMethodAccessor3.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:452)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:888)
{noformat}

Not sure if this is the cause or a side effect. I wonder why there are so many blocks under
recovery.
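A minimal Java sketch of the pattern the trace shows (class and method names here are hypothetical, loosely modeled on FSDataset): `Thread.join()` is called while the object's monitor is still held, so every other `synchronized` call on the same object queues up behind the join for as long as the joined thread keeps running.

```java
// Sketch only: demonstrates join-while-holding-a-monitor, the pattern
// visible in the stack trace above. Names are illustrative, not Hadoop's.
public class JoinUnderLock {
    static class Dataset {
        // Analogous to interruptOngoingCreates(): joins a writer thread
        // while still holding the Dataset monitor.
        synchronized void interruptOngoingCreates(Thread writer) throws InterruptedException {
            writer.interrupt();
            writer.join();          // blocks with the monitor held
        }

        // Any other synchronized method on the same object (e.g. a block
        // report) now waits until the join above returns.
        synchronized long getCapacity() {
            return 42L;
        }
    }

    public static void main(String[] args) throws Exception {
        Dataset ds = new Dataset();
        // Writer that ignores the interrupt for ~300 ms, like a thread
        // busy finishing an ongoing create.
        Thread writer = new Thread(() -> {
            long end = System.currentTimeMillis() + 300;
            while (System.currentTimeMillis() < end) { /* busy */ }
        });
        writer.start();

        // "IPC handler" that takes the monitor and joins the writer.
        Thread handler = new Thread(() -> {
            try {
                ds.interruptOngoingCreates(writer);
            } catch (InterruptedException ignored) { }
        });
        handler.start();
        Thread.sleep(50);           // let the handler grab the monitor first

        long t0 = System.nanoTime();
        ds.getCapacity();           // stalls until the writer finishes
        long waitedMs = (System.nanoTime() - t0) / 1_000_000;
        System.out.println("getCapacity waited ~" + waitedMs + " ms");
    }
}
```

With many blocks under recovery, each such join serializes the rest of the handlers behind one lock, which matches many threads appearing blocked on a single monitor owner.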

> unstable dfs when running jobs on 0.18.1
> ----------------------------------------
>
>                 Key: HADOOP-4517
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4517
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: dfs
>    Affects Versions: 0.18.1
>         Environment: hadoop-0.18.1 plus patches HADOOP-4277 HADOOP-4271 HADOOP-4326 HADOOP-4314 HADOOP-3914 HADOOP-4318 HADOOP-4351 HADOOP-4395
>            Reporter: Christian Kunz
>         Attachments: datanode.out
>
>
> 2 attempts of a job using 6000 maps, 1900 reduces
> 1st attempt: failed during reduce phase after 22 hours with 31 dead datanodes, most of which became unresponsive due to an exception; dfs lost blocks
> 2nd attempt: failed during map phase after 5 hours with 5 dead datanodes due to exception; dfs lost blocks responsible for job failure.
> I will post typical datanode exception and attach thread dump.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

