hadoop-common-dev mailing list archives

From "Devaraj Das (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-1331) Multiple entries for 'dfs.client.buffer.dir'
Date Mon, 07 May 2007 08:22:15 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-1331?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12493991 ]

Devaraj Das commented on HADOOP-1331:
-------------------------------------

The DFSClient already uses the Configuration.getLocalPath API, which allocates a directory
(and hence a drive) based on the hash of the pathname. So, yes, all the drives will be
utilized (subject to how the hash values distribute). But HADOOP-1252 could improve this
situation IMO, and the DFSClient should use the new APIs provided there.

> Multiple entries for 'dfs.client.buffer.dir'
> --------------------------------------------
>
>                 Key: HADOOP-1331
>                 URL: https://issues.apache.org/jira/browse/HADOOP-1331
>             Project: Hadoop
>          Issue Type: Improvement
>          Components: dfs
>            Reporter: Koji Noguchi
>            Priority: Minor
>
> If the (DFS) client host has multiple drives, I'd like the different 'dfs -put' calls
> to utilize these drives.
> Also, 
>  - It might be helpful when we have multiple reducers writing to dfs. 
>  - If we want the datanode/tasktracker to skip a dead drive, we probably need this?

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

