hadoop-hdfs-dev mailing list archives

From Harsh J <ha...@cloudera.com>
Subject Re: Clarifications on excludedNodeList in DFSClient
Date Tue, 20 Nov 2012 18:19:18 GMT
The excludedNode list is initialized per output stream created under a
DFSClient instance. That is, it starts empty for every new
DFSOutputStream returned by FS.create(), and is maintained separately
for each file created under a common DFSClient.

However, this could indeed be a problem for a long-running single-file
client, by which I assume you mean one that stays alive and
hflush()es continuously.

Can you search for and file a JIRA to address this with any discussion
taken there? Please put up your thoughts there as well.
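One way to address this, as a rough sketch only (this is not the actual
DFSClient API; the class and constant names here are hypothetical), is
to give each excluded entry a timestamp and let it expire after a
timeout, so a node excluded on a transient write failure becomes
eligible for new block allocation again:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of a time-expiring exclude list. Entries added
// on a write failure are dropped after EXPIRY_MS, so a node is retried
// eventually even in a long-lived client that caches its DFSClient.
public class ExpiringExcludeList {
    // Illustrative value; a real implementation would make this configurable.
    private static final long EXPIRY_MS = 10 * 60 * 1000; // 10 minutes

    // Maps a datanode identifier to the time it was excluded.
    private final Map<String, Long> excluded = new ConcurrentHashMap<>();

    public void exclude(String node) {
        excluded.put(node, System.currentTimeMillis());
    }

    public boolean isExcluded(String node) {
        Long since = excluded.get(node);
        if (since == null) {
            return false;
        }
        if (System.currentTimeMillis() - since > EXPIRY_MS) {
            excluded.remove(node); // expired: node is eligible again
            return false;
        }
        return true;
    }
}
```

A JIRA discussion would need to settle the expiry policy (fixed timeout
vs. backoff) and the default interval; the fixed 10-minute window above
is purely illustrative.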

On Mon, Nov 19, 2012 at 3:25 PM, Inder Pall <inder.pall@gmail.com> wrote:
> Folks,
> I was wondering if there is any mechanism/logic to move a node back from the
> excludedNodeList to the live nodes, so it can be tried for new block creation.
> In the current DFSClient code I do not see this. The use-case: if the
> write timeout is reduced, certain nodes get aggressively added to the
> excludedNodeList, and if the client caches the DFSClient, those excluded
> nodes are never tried again for the lifetime of the application.
> --
> - Inder
> "You are average of the 5 people you spend the most time with"

Harsh J
