hadoop-common-dev mailing list archives

From "Robert Chansler (JIRA)" <j...@apache.org>
Subject [jira] Updated: (HADOOP-3132) DFS writes stuck occasionally
Date Fri, 11 Apr 2008 00:26:05 GMT

     [ https://issues.apache.org/jira/browse/HADOOP-3132?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]

Robert Chansler updated HADOOP-3132:
------------------------------------

      Description: 

This problem happens in 0.17 trunk.

As reported in HADOOP-3124, I saw reducers wait 10 minutes while writing data to DFS and then time out. The client retried and timed out again after another 19 minutes.

During the period the write was stuck, all the nodes in the datanode pipeline were functioning fine, and the system load was normal. I don't believe this was due to slow network cards/disk drives or overloaded machines. I believe this and HADOOP-3033 are somehow related.

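The timeout-and-retry behavior described above can be sketched generically as follows. This is an illustrative pattern only, not Hadoop's actual DFSClient internals; the class name, timeouts, and retry count here are assumptions for the sake of the example:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class RetryingWrite {
    // Attempt a write, failing the attempt if it does not complete within
    // timeoutMillis; retry up to maxRetries additional times, roughly as the
    // client in this report appears to do (hypothetical sketch).
    static boolean writeWithRetry(Runnable write, long timeoutMillis, int maxRetries) {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        try {
            for (int attempt = 0; attempt <= maxRetries; attempt++) {
                Future<?> f = pool.submit(write);
                try {
                    f.get(timeoutMillis, TimeUnit.MILLISECONDS);
                    return true;                 // write completed in time
                } catch (TimeoutException e) {
                    f.cancel(true);              // abandon this stuck attempt
                } catch (InterruptedException | ExecutionException e) {
                    return false;                // hard failure, no retry
                }
            }
            return false;                        // every attempt timed out
        } finally {
            pool.shutdownNow();
        }
    }

    public static void main(String[] args) {
        // A write that returns immediately succeeds on the first attempt.
        System.out.println(writeWithRetry(() -> {}, 1000, 1));
    }
}
```

The point of the sketch is that a write that blocks indefinitely in the pipeline will burn the full timeout on the first attempt and then again on each retry, which matches the 10-minute-then-19-minute pattern the reporter observed.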



         Priority: Blocker  (was: Major)
    Fix Version/s: 0.17.0
         Assignee: Raghu Angadi

> DFS writes stuck occasionally
> -----------------------------
>
>                 Key: HADOOP-3132
>                 URL: https://issues.apache.org/jira/browse/HADOOP-3132
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: dfs
>            Reporter: Runping Qi
>            Assignee: Raghu Angadi
>            Priority: Blocker
>             Fix For: 0.17.0
>
>
> This problem happens in 0.17 trunk.
> As reported in HADOOP-3124, I saw reducers wait 10 minutes while writing data to DFS and then time out.
> The client retried and timed out again after another 19 minutes.
> During the period the write was stuck, all the nodes in the datanode pipeline were functioning fine.
> The system load was normal.
> I don't believe this was due to slow network cards/disk drives or overloaded machines.
> I believe this and HADOOP-3033 are somehow related.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

