hadoop-common-dev mailing list archives

From "Runping Qi (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-3132) DFS writes stuck occasionally
Date Mon, 31 Mar 2008 05:14:24 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-3132?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12583535#action_12583535 ]

Runping Qi commented on HADOOP-3132:

The random writer example should be able to reproduce this problem.

Try a map-only job with 500 mappers, each writing 4 GB of data.
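A run along those lines might look like the following. This is only a sketch: the jar name, output path, and the `test.randomwrite.*` / `test.randomwriter.*` property names are assumptions based on the 0.17-era examples jar and may differ on a given build.

```shell
# Sketch: drive the RandomWriter example as a map-only DFS write load.
# Jar path and property names are assumptions; verify against your build.
hadoop jar hadoop-0.17.0-examples.jar randomwriter \
    -Dtest.randomwrite.bytes_per_map=4294967296 \
    -Dtest.randomwriter.maps_per_host=10 \
    /benchmarks/randomwriter-out
# With ~50 hosts at 10 maps per host this yields ~500 mappers,
# each writing 4 GB, which should stress the datanode write pipeline.
```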

> DFS writes stuck occasionally
> -----------------------------
>                 Key: HADOOP-3132
>                 URL: https://issues.apache.org/jira/browse/HADOOP-3132
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: dfs
>            Reporter: Runping Qi
> This problem happens on 0.17 trunk.
> As reported in HADOOP-3124,
> I saw reducers wait 10 minutes while writing data to DFS and then time out.
> The client retried and timed out again after another 19 minutes.
> During the period the write was stuck, all the nodes in the datanode pipeline were functioning,
> and the system load was normal.
> I don't believe this was due to slow network cards/disk drives or overloaded machines.
> I believe this and HADOOP-3033 are related somehow.

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
