hadoop-common-dev mailing list archives

From Dhruba Borthakur <dhr...@gmail.com>
Subject Re: short-circuiting HDFS reads
Date Sat, 14 Feb 2009 05:45:08 GMT
Hi folks,

This is a very interesting discussion. We have been considering dividing up a
set of "old" NetApp storage among a small number of nodes that run HDFS.
This HDFS instance could be used to archive rarely used data from our production
cluster. The NetApp storage could actually be mounted on all HDFS client
machines, and it would be nice if there were a short-circuit protocol to let
HDFS clients write directly to the block file.

This is helpful in the scenario where there is shared storage that can be
accessed directly by both datanodes and HDFS clients. I understand this
exposes problems with security and the like.
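The write-side short-circuit described above could be sketched roughly as
follows: when the block directory is visible on a shared mount, the client
writes the block file in place and only notifies the datanode that a replica
exists, instead of streaming bytes through the write pipeline. Everything
below is hypothetical illustration, not an actual HDFS API, and it ignores
the security issues mentioned in this thread:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.function.Consumer;

/** Hypothetical write-side short-circuit over shared (e.g. NFS-mounted) storage. */
public class SharedStorageBlockWriter {
    private final Path sharedBlockDir;             // block directory visible to client and datanode
    private final Consumer<String> notifyDatanode; // stand-in for a "replica is ready" RPC

    public SharedStorageBlockWriter(Path sharedBlockDir, Consumer<String> notifyDatanode) {
        this.sharedBlockDir = sharedBlockDir;
        this.notifyDatanode = notifyDatanode;
    }

    /** Write the block file directly on the shared mount, bypassing the DataNode pipeline. */
    public Path writeBlock(String blockId, byte[] data) throws IOException {
        Path blockFile = sharedBlockDir.resolve("blk_" + blockId); // naming is illustrative only
        Files.write(blockFile, data);   // direct local write instead of socket streaming
        notifyDatanode.accept(blockId); // the datanode still records/validates the new replica
        return blockFile;
    }
}
```

The datanode notification is the interesting part of any real design: without
it, the namenode never learns the replica exists.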


On Fri, Feb 13, 2009 at 2:39 PM, Sanjay Radia <sradia@yahoo-inc.com> wrote:

> On Jan 8, 2009, at 10:13 AM, George Porter wrote:
>> Hi Jun,
>> The earlier responses to your email reference the JIRA that I opened
>> about this issue.  Short-circuiting the primary HDFS datapath does
>> improve throughput, and the amount depends on your workload (random
>> reads especially).  Some initial experimental results are posted to that
>> JIRA.  A second advantage is that since the JVM hosting the HDFS client
>> is doing the reading, the OS will satisfy future disk requests from the
>> cache, which isn't really possible when you read over the network (even
>> to another JVM on the same host).
>> There are several real disadvantages, the largest of which are that 1) it
>> adds a new datapath, and 2) it bypasses various security and auditing
>> features of HDFS.
> We are in the middle of adding security to HDFS.
> Having the client read the blocks directly would violate security. Security
> is an especially thorny problem to solve in this case.
> Further, the internal structure, and hence the path name of the block file,
> is not visible outside.
> One could consider hacking this (ignoring security), but even that gets
> tricky, as the directory in which the block is saved may change if
> someone starts to write to the file (which can happen with the recent
> append work).
> Interesting optimization, but tricky to do in a clean way (at least it is
> not obvious to me).
> sanjay
>> I would certainly like to think through a cleaner
>> interface for achieving this goal, especially since reading local data
>> should be the common case.  Any thoughts you might have would be
>> appreciated.
>> Thanks,
>> George
>> Jun Rao wrote:
>> > Hi,
>> >
>> > Today, HDFS always reads through a socket even when the data is local to
>> > the client. This adds a lot of overhead, especially for warm reads. It
>> > should be possible for a DFS client to test whether a block to be read is
>> > local and, if so, bypass the socket and read through the local FS API
>> > directly. This should improve random access performance significantly
>> > (e.g., for HBase). Has this been considered in HDFS? Thanks,
>> >
>> > Jun
>> >
>> >

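The read-side fallback Jun proposes can be sketched as client-side decision
logic: if a replica of the block is stored at a locally readable path, read it
with the ordinary file API; otherwise fall back to the normal DataNode socket
read. The class and method names below are hypothetical, not actual HDFS APIs:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Optional;
import java.util.function.Supplier;

/** Hypothetical sketch of a short-circuit block read with a network fallback. */
public class ShortCircuitReader {
    private final Supplier<byte[]> remoteRead; // stand-in for the normal DataNode socket path

    public ShortCircuitReader(Supplier<byte[]> remoteRead) {
        this.remoteRead = remoteRead;
    }

    /**
     * If the block file is readable on a local mount, bypass the socket and
     * read it through the local filesystem API; otherwise use the network read.
     */
    public byte[] read(Optional<Path> localBlockFile) throws IOException {
        if (localBlockFile.isPresent() && Files.isReadable(localBlockFile.get())) {
            return Files.readAllBytes(localBlockFile.get()); // short-circuit path
        }
        return remoteRead.get(); // regular HDFS datapath
    }
}
```

As Sanjay notes above, the hard part in practice is obtaining and trusting
`localBlockFile` at all: the block path is internal to the datanode and may
move while the file is being appended to.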