hadoop-hdfs-issues mailing list archives

From "Lohit Vijayarenu (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (HDFS-5499) Provide way to throttle per FileSystem read/write bandwidth
Date Mon, 11 Nov 2013 22:22:17 GMT

     [ https://issues.apache.org/jira/browse/HDFS-5499?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel

Lohit Vijayarenu updated HDFS-5499:

    Attachment: HDFS-5499.1.patch

We have been thinking along these lines; the attached patch can throttle HDFS read/write
and Hftp reads. Posting here for input from people who might have thought about this
use case, and to hear what they think.
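The core idea of such a throttler can be sketched with a simple rate-accounting class: after each chunk of data is transferred, the caller reports the byte count, and the throttler sleeps just long enough to keep the observed rate under the configured cap. This is a minimal illustration only; the class and method names below are hypothetical and are not taken from the attached patch.

```java
// Hypothetical sketch of a per-stream bandwidth throttler.
// Names (BandwidthThrottler, throttle) are illustrative, not from HDFS-5499.1.patch.
public class BandwidthThrottler {
    private final long bytesPerSecond;   // configured bandwidth cap
    private long periodStart;            // start of the current accounting window
    private long bytesThisPeriod;        // bytes transferred in the current window

    public BandwidthThrottler(long bytesPerSecond) {
        this.bytesPerSecond = bytesPerSecond;
        this.periodStart = System.currentTimeMillis();
    }

    // Call after transferring numBytes; sleeps if the observed rate
    // exceeds the configured cap.
    public synchronized void throttle(long numBytes) throws InterruptedException {
        bytesThisPeriod += numBytes;
        long elapsed = System.currentTimeMillis() - periodStart;
        // Time the transfer *should* have taken at the configured rate.
        long expectedMillis = bytesThisPeriod * 1000L / bytesPerSecond;
        if (expectedMillis > elapsed) {
            Thread.sleep(expectedMillis - elapsed);
        }
        // Reset the accounting window periodically so a long-idle stream
        // does not accumulate unlimited burst credit.
        if (elapsed > 1000) {
            periodStart = System.currentTimeMillis();
            bytesThisPeriod = 0;
        }
    }
}
```

In a per-FileSystem design, one such throttler instance would be shared by all streams opened through a given FileSystem object, so the cap applies to the JVM's aggregate traffic rather than to each stream individually.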

> Provide way to throttle per FileSystem read/write bandwidth
> -----------------------------------------------------------
>                 Key: HDFS-5499
>                 URL: https://issues.apache.org/jira/browse/HDFS-5499
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>            Reporter: Lohit Vijayarenu
>         Attachments: HDFS-5499.1.patch
> In some cases it might be worthwhile to throttle read/write bandwidth on a per-JVM basis so
that clients do not spawn too many threads and start data movement that causes other JVMs to starve.
The ability to throttle read/write bandwidth per FileSystem would help avoid such issues.
> The challenge seems to be how well this can fit into the FileSystem code. If one enables throttling
around the FileSystem APIs, then any hidden data transfer within the cluster that uses them might also
be affected, e.g. copying the job jar during job submission, localizing resources for the distributed
cache, and so on.

This message was sent by Atlassian JIRA
