hadoop-hdfs-issues mailing list archives

From "Steve Loughran (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-5499) Provide way to throttle per FileSystem read/write bandwidth
Date Wed, 13 Nov 2013 10:53:34 GMT

    [ https://issues.apache.org/jira/browse/HDFS-5499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13821185#comment-13821185 ]

Steve Loughran commented on HDFS-5499:
--------------------------------------

I've looked at it a bit within the context of YARN.

YARN containers are where this would be ideal, as then you'd be able to request IO capacity
as well as CPU and RAM. For that to work, the throttling would have to be outside the App,
as you are trying to limit code whether or not it wants to be limited, and because you probably
want to give it more bandwidth if the system is otherwise idle. Self-throttling doesn't pick
up spare IO.
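
To make that concrete, here's a minimal self-throttling sketch (a hypothetical wrapper, not an
existing Hadoop class): a client-side token bucket can cap its own long-run read rate, but it
has no view of whether the disks underneath are idle, so it can never claim the spare IO back.

{code:java}
import java.io.FilterInputStream;
import java.io.IOException;
import java.io.InputStream;

/**
 * Hypothetical sketch, not an existing Hadoop class: a client-side
 * stream that limits its own long-run average read bandwidth.
 */
public class ThrottledInputStream extends FilterInputStream {
  private final long bytesPerSecond;                     // configured cap
  private final long start = System.currentTimeMillis();
  private long bytesRead = 0;

  public ThrottledInputStream(InputStream in, long bytesPerSecond) {
    super(in);
    this.bytesPerSecond = bytesPerSecond;
  }

  /** Sleep until the average rate drops back under the cap. Note the
   *  wrapper can only ever slow itself down; it cannot discover that
   *  the disks are idle and speed up. */
  private void throttle(long justRead) throws IOException {
    bytesRead += justRead;
    long elapsed = System.currentTimeMillis() - start;
    long earliest = (bytesRead * 1000L) / bytesPerSecond;
    if (earliest > elapsed) {
      try {
        Thread.sleep(earliest - elapsed);
      } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
        throw new IOException("interrupted while throttling", e);
      }
    }
  }

  @Override
  public int read(byte[] b, int off, int len) throws IOException {
    int n = super.read(b, off, len);
    if (n > 0) {
      throttle(n);
    }
    return n;
  }

  @Override
  public int read() throws IOException {
    int b = super.read();
    if (b >= 0) {
      throttle(1);
    }
    return b;
  }
}
{code}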

* you can use cgroups in YARN to throttle local disk IO through file:// URLs or the Java
filesystem APIs, such as for MR temp data
* you can't cgroup-throttle HDFS per YARN container, which would be the ideal use case for
it. The IO is taking place in the DN, and cgroups only limit IO in the throttled process
group.
* implementing it in the DN would require significantly more complex code there to prioritise
work based on block ID (the sole identifier that goes around everywhere) or input source (local
sockets for HBase IO vs. the TCP stack)
* Once you go to a heterogeneous filesystem you need to think about IO load per storage layer
as well as/alongside per-volume
* There's also a generic RPC request throttle to prevent DoS against the NN and other HDFS
services. That would need to be server side, but once implemented in the RPC code it would
be universal (see the sketch after this list).
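
A minimal sketch of what that server-side throttle could look like, assuming Guava's
RateLimiter (which Hadoop already bundles); the class and method names are illustrative,
not part of the Hadoop RPC code:

{code:java}
import java.util.concurrent.ConcurrentHashMap;
import com.google.common.util.concurrent.RateLimiter;

/**
 * Hypothetical sketch of a server-side RPC throttle; the class and
 * method names are illustrative, not the Hadoop RPC API.
 */
public class RpcRequestThrottle {
  private final double requestsPerSecond;               // per-caller budget
  private final ConcurrentHashMap<String, RateLimiter> limiters =
      new ConcurrentHashMap<>();

  public RpcRequestThrottle(double requestsPerSecond) {
    this.requestsPerSecond = requestsPerSecond;
  }

  /** Returns true if the caller is under its rate. Rejecting (rather
   *  than blocking) over-rate calls keeps handler threads free, which
   *  is the point when the goal is DoS prevention. */
  public boolean admit(String callerId) {
    return limiters
        .computeIfAbsent(callerId, id -> RateLimiter.create(requestsPerSecond))
        .tryAcquire();
  }
}
{code}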

You also need to define what load you are trying to throttle: pure RPCs/second, read
bandwidth, write bandwidth, seeks or IOPS. Once a file is lined up for sequential reading,
you'd almost want it to stream through the next blocks until a high-priority request came
through, but operations like a seek, which would involve a backwards disk head movement,
would be something to throttle (hence you need to be storage-type aware, as SSD seeks cost
less). You also need to consider that although the cost of writes is high, the writing is
usually being done with the goal of preserving data, and you don't want to impact durability.
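
One way to pull those threads together is a per-operation cost model, where a backwards seek
on spinning disk burns far more throttle budget than one on SSD, sequential reads stay cheap
so a lined-up file keeps streaming, and writes are deliberately charged lightly so durability
work isn't delayed. A hypothetical sketch (all names and constants illustrative):

{code:java}
/**
 * Hypothetical sketch of a per-operation cost model; all names and
 * constants are illustrative. The idea is that throttling charges
 * tokens per operation, with prices reflecting the real cost on the
 * underlying medium.
 */
public class IoCostModel {
  public enum StorageType { DISK, SSD }
  public enum OpType { SEQUENTIAL_READ, SEEK, WRITE, RPC }

  /** Number of throttle tokens an operation should cost. */
  public static int cost(OpType op, StorageType storage) {
    switch (op) {
      case SEQUENTIAL_READ:
        return 1;   // cheapest: let a lined-up file keep streaming
      case SEEK:
        // head movement is what hurts on spinning disk, much less on SSD
        return storage == StorageType.SSD ? 2 : 20;
      case WRITE:
        // physically expensive, but charged lightly so that throttling
        // never delays data that is being written for durability
        return 1;
      case RPC:
        return 1;
      default:
        throw new IllegalArgumentException("unknown op: " + op);
    }
  }

  private IoCostModel() {}
}
{code}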


> Provide way to throttle per FileSystem read/write bandwidth
> -----------------------------------------------------------
>
>                 Key: HDFS-5499
>                 URL: https://issues.apache.org/jira/browse/HDFS-5499
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>            Reporter: Lohit Vijayarenu
>         Attachments: HDFS-5499.1.patch
>
>
> In some cases it might be worthwhile to throttle read/write bandwidth on a per-JVM basis
so that clients do not spawn too many threads and start data movement, causing other JVMs to
starve. The ability to throttle read/write bandwidth per FileSystem would help avoid such issues.
> Challenge seems to be how well this can fit into the FileSystem code. If one enables
throttling around the FileSystem APIs, then any hidden data transfer within the cluster using
them might also be affected, e.g. copying the job jar during job submission, localizing
resources for the distributed cache, and such.
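
If one did want the per-JVM throttle the description asks for, the simplest shape is probably
a single process-wide token bucket that every stream the FileSystem hands out draws from. A
hypothetical sketch, again assuming Guava's RateLimiter; it also exhibits exactly the challenge
raised above, since the job jar copy and distributed cache localization go through the same
FileSystem and so drain the same bucket.

{code:java}
import com.google.common.util.concurrent.RateLimiter;

/**
 * Hypothetical sketch, not an existing Hadoop API: one process-wide
 * token bucket that every read path in the JVM draws from.
 */
public final class JvmReadThrottle {
  // Shared per-JVM limit, e.g. 64 MB/s expressed as bytes/second.
  private static final RateLimiter BYTES =
      RateLimiter.create(64L * 1024 * 1024);

  /** Called by every stream before handing n bytes back to the caller.
   *  Because the bucket is JVM-wide, hidden transfers (job jar copies,
   *  distributed cache localization) drain it too, which is exactly
   *  the challenge the description raises. */
  public static void charge(int n) {
    if (n > 0) {
      BYTES.acquire(n);
    }
  }

  private JvmReadThrottle() {}
}
{code}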



