hadoop-hdfs-issues mailing list archives

From "Vinay (JIRA)" <j...@apache.org>
Subject [jira] [Created] (HDFS-5299) DFS client hangs in updatePipeline RPC when failover happened
Date Fri, 04 Oct 2013 05:15:42 GMT
Vinay created HDFS-5299:
---------------------------

             Summary: DFS client hangs in updatePipeline RPC when failover happened
                 Key: HDFS-5299
                 URL: https://issues.apache.org/jira/browse/HDFS-5299
             Project: Hadoop HDFS
          Issue Type: Bug
          Components: namenode
    Affects Versions: 2.1.0-beta, 3.0.0
            Reporter: Vinay
            Assignee: Vinay
            Priority: Blocker


The DFSClient hangs in the updatePipeline call to the NameNode when a failover happens at exactly the same time.


On digging deeper, the issue was found to be in the RetryCache handling of updatePipeline.

Here are the steps:
1. The client was writing slowly.
2. One of the datanodes went down, and updatePipeline was called on the active NameNode (ANN).
3. The call reached the ANN, but the ANN was shut down while processing it.
4. The client retried (since the API is marked AtMostOnce) against the other NameNode, which was still in STANDBY state, and got a StandbyException.
5. The client failed over one more time.
6. The standby NameNode (SNN) became active.
7. The client called the current ANN again for updatePipeline.

Now the client call hangs in the NameNode, waiting for the cached call with the same callId to complete.
But that cached call already finished last time with a StandbyException.
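The hang can be sketched in miniature. This is a hypothetical illustration (names are illustrative, not Hadoop's actual RetryCache code): the first attempt adds a cache entry keyed by callId and then dies with a StandbyException without ever completing that entry, so the retried call with the same callId blocks on a latch that will never be counted down.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

// Hypothetical sketch of the buggy path, not Hadoop's actual implementation.
public class StaleEntryHang {
    public static void main(String[] args) throws Exception {
        ConcurrentHashMap<Long, CountDownLatch> cache = new ConcurrentHashMap<>();

        long callId = 7;
        // Attempt 1: an entry is added for this callId, then the call dies
        // with a StandbyException -- the latch is never counted down.
        cache.putIfAbsent(callId, new CountDownLatch(1));

        // Attempt 2 (after failover): same callId, so the server waits on the
        // cached entry instead of re-executing the operation.
        CountDownLatch stale = cache.get(callId);
        boolean completed = stale.await(200, TimeUnit.MILLISECONDS);
        // The entry never completes; a real server-side waiter with no
        // timeout would hang here forever.
        assert !completed : "stale entry unexpectedly completed";
        System.out.println("retry blocked on an entry that will never complete");
    }
}
```

A bounded await is used here only so the sketch terminates; the server-side wait in the reported bug is unbounded, which is why the client call hangs.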

Conclusion:
Whenever a new entry is added to the cache, we need to record the result of the call before returning from the call or throwing an exception.
I can see a similar issue in multiple RPCs in FSNamesystem.
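The proposed fix can be sketched as follows. This is a minimal, hypothetical model (MiniRetryCache and its methods are illustrative, not Hadoop's RetryCache API): the handler records the outcome, success or failure, and releases waiters in a finally block before the call returns or throws, so a retry with the same callId immediately observes the cached outcome instead of waiting forever.

```java
import java.util.Map;
import java.util.concurrent.Callable;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

// Hypothetical sketch of the fix: always complete the cache entry before
// returning or throwing. Names are illustrative, not Hadoop's actual code.
public class RetryCacheSketch {
    static final class Entry {
        final CountDownLatch done = new CountDownLatch(1);
        volatile String result;      // set on success
        volatile Exception failure;  // set on failure
    }

    private final Map<Long, Entry> entries = new ConcurrentHashMap<>();

    /** First caller executes the operation; retries wait for its outcome. */
    String handle(long callId, Callable<String> op) throws Exception {
        Entry fresh = new Entry();
        Entry prev = entries.putIfAbsent(callId, fresh);
        if (prev != null) {
            // Retried call: wait for the original attempt's recorded outcome.
            if (!prev.done.await(5, TimeUnit.SECONDS)) {
                throw new IllegalStateException("cached call never completed");
            }
            if (prev.failure != null) throw prev.failure;
            return prev.result;
        }
        try {
            fresh.result = op.call();
            return fresh.result;
        } catch (Exception e) {
            fresh.failure = e;          // record failures too (the missing step)
            throw e;
        } finally {
            fresh.done.countDown();     // always release waiters before returning
        }
    }

    public static void main(String[] args) throws Exception {
        RetryCacheSketch cache = new RetryCacheSketch();
        // First attempt fails (e.g. with a StandbyException-like error); the
        // outcome is still recorded in the cache entry.
        try {
            cache.handle(42L, () -> { throw new Exception("standby"); });
        } catch (Exception expected) { }
        // The retry with the same callId sees the cached failure immediately
        // instead of hanging on an entry that was never completed.
        Exception replayed = null;
        try {
            cache.handle(42L, () -> "unreachable");
        } catch (Exception e) { replayed = e; }
        assert replayed != null && "standby".equals(replayed.getMessage());
        System.out.println("retry observed cached failure without hanging");
    }
}
```

In a real system a StandbyException would likely be treated as retriable rather than cached as a final result, but the core point stated above holds: the entry must be completed, one way or the other, before the RPC handler returns or throws.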



--
This message was sent by Atlassian JIRA
(v6.1#6144)
