From: lohit
To: hdfs-dev@hadoop.apache.org
Date: Mon, 11 Nov 2013 10:59:52 -0800
Subject: HDFS read/write data throttling

Hello Devs,

Wanted to reach out and see if anyone has thought about the ability to
throttle data transfer within HDFS. One option we have been considering is
to throttle on a per-FileSystem basis, similar to Statistics in FileSystem.
This would mean that anyone holding an HDFS/Hftp handle would be throttled
globally within the JVM. The right value for the limit would depend on the
type of hardware we use and how many tasks/clients we allow. On the other
hand, doing this at the FileSystem layer would mean that many other
transfers, such as job jar copies, DistributedCache copies, and any hidden
data movement, would also be throttled. We wanted to know if anyone has had
such a requirement on their clusters in the past and what the thinking
around it was.

Appreciate your inputs/comments.

--
Have a Nice Day!
Lohit
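
P.S. To make the idea a bit more concrete, here is a rough, untested sketch
of the kind of per-JVM token-bucket throttle I have in mind. The class names
(BandwidthThrottler, ThrottledInputStream) and the notion of a client-side
bytes-per-second setting are made up for illustration; they are not existing
HDFS APIs.

import java.io.FilterInputStream;
import java.io.IOException;
import java.io.InputStream;

// Simple token-bucket byte throttler; one instance would be shared JVM-wide.
class BandwidthThrottler {
  private final long bytesPerSec;   // configured limit (hypothetical client-side setting)
  private long availableBytes;      // tokens left; may go negative ("debt")
  private long lastRefillNanos;

  BandwidthThrottler(long bytesPerSec) {
    this.bytesPerSec = bytesPerSec;
    this.availableBytes = bytesPerSec;
    this.lastRefillNanos = System.nanoTime();
  }

  // Charges numBytes against the bucket, blocking first if we are in debt.
  synchronized void acquire(long numBytes) throws IOException {
    refill();
    while (availableBytes <= 0) {
      try {
        wait(10);                   // re-check after a short sleep; no notify needed
      } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
        throw new IOException("interrupted while throttling", e);
      }
      refill();
    }
    availableBytes -= numBytes;     // may go negative; the debt delays later callers
  }

  private void refill() {
    long now = System.nanoTime();
    long newTokens = (now - lastRefillNanos) * bytesPerSec / 1000000000L;
    if (newTokens > 0) {
      availableBytes = Math.min(bytesPerSec, availableBytes + newTokens);
      lastRefillNanos = now;
    }
  }
}

// Wraps any InputStream (e.g. the one returned by FileSystem.open) and throttles reads.
class ThrottledInputStream extends FilterInputStream {
  private final BandwidthThrottler throttler;

  ThrottledInputStream(InputStream in, BandwidthThrottler throttler) {
    super(in);
    this.throttler = throttler;
  }

  @Override
  public int read() throws IOException {
    throttler.acquire(1);
    return in.read();
  }

  @Override
  public int read(byte[] b, int off, int len) throws IOException {
    throttler.acquire(len);         // slightly over-charges if fewer bytes are read
    return in.read(b, off, len);
  }
}

A matching FilterOutputStream wrapper would cover the write path, and wiring
both behind the FileSystem open/create calls is what would give the
JVM-global, per-FileSystem behavior described above. I believe the Balancer
already does something similar for block moves, so the new part is really
deciding where the shared bucket lives and which code paths it covers.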