hadoop-common-issues mailing list archives

From "Gopal V (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HADOOP-11867) FS API: Add a high-performance vectored Read to FSDataInputStream API
Date Wed, 22 Apr 2015 18:08:00 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-11867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14507569#comment-14507569 ]

Gopal V commented on HADOOP-11867:
----------------------------------

bq. {{openAt(Path, offset)}}

That is a good idea, because in general the seek is unavoidable when processing a FileSplit<offset:len>
- the reader opens and then seeks to the split offset immediately.
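
A minimal sketch of what such a default could look like, assuming an {{openAt(Path, long)}} on FileSystem - the name and placement are only what this thread floats, not an agreed API:

{code:java}
// Hypothetical convenience method: the default simply composes the existing
// open() and seek(), so an FS implementation could later override it with a
// natively positioned open if it has one.
public FSDataInputStream openAt(Path path, long offset) throws IOException {
  FSDataInputStream in = open(path);  // existing FileSystem#open
  in.seek(offset);                    // jump straight to the split's start offset
  return in;
}
{code}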

bq. is it an error if I ask for overlapping ranges?

I think that should be treated as an error, since overlapping ranges are not only hard to translate
into fadvise or its equivalents (with page alignment, sloppy reads etc.), but also amount to wasted
bandwidth and CPU, fetching the same data twice across the wire in the stub implementation.
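
For illustration, the stub could reject overlaps with a check along these lines (assuming the ranges arrive as parallel offset/length arrays, which is only one possible shape for the API):

{code:java}
// Hypothetical pre-check for the stub: sort the requested ranges by offset
// and fail fast if any range starts before the previous one ends.
static void checkDisjoint(long[] offsets, int[] lengths) throws IOException {
  Integer[] order = new Integer[offsets.length];
  for (int i = 0; i < order.length; i++) {
    order[i] = i;
  }
  java.util.Arrays.sort(order, (a, b) -> Long.compare(offsets[a], offsets[b]));
  long prevEnd = Long.MIN_VALUE;
  for (int i : order) {
    if (offsets[i] < prevEnd) {
      throw new IOException("Overlapping read ranges at offset " + offsets[i]);
    }
    prevEnd = offsets[i] + lengths[i];
  }
}
{code}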

Agree on 2 & 3, while 1 is a normal IOException. This has the slight disadvantage that
the buffers need to be allocated upfront and the API does not return until all the buffers
are full.

But that is in effect the trade-off this API represents over the regular seek/read combination:
making all the reads ahead of time into processing buffers vs making them one at a time.
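
To make that trade-off concrete, the caller-side difference would look roughly like this (the vectored signature is the one proposed in the description below; the loop uses the existing seek/readFully calls and assumes heap-backed buffers purely for illustration):

{code:java}
// Today: one seek + readFully per range, paying the seek latency each time.
for (int i = 0; i < offsets.length; i++) {
  in.seek(offsets[i]);
  in.readFully(buffers[i].array(), 0, buffers[i].remaining());
}

// Proposed: hand every range over in a single call; the FS may plan,
// reorder or coalesce the reads, but the call only returns once all the
// buffers are full.
in.readFully(offsets, buffers);
{code}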

> FS API: Add a high-performance vectored Read to FSDataInputStream API
> ---------------------------------------------------------------------
>
>                 Key: HADOOP-11867
>                 URL: https://issues.apache.org/jira/browse/HADOOP-11867
>             Project: Hadoop Common
>          Issue Type: New Feature
>    Affects Versions: 2.8.0
>            Reporter: Gopal V
>
> The most significant way to read from a filesystem efficiently is to let the FileSystem
implementation handle the seek behaviour underneath the API, so that it can be as efficient
as possible.
> A better approach to the seek problem is to provide a sequence of read locations as part
of a single call, while letting the system schedule/plan the reads ahead of time.
> This is exceedingly useful for seek-heavy readers on HDFS, since this allows for potentially
optimizing away the seek-gaps within the FSDataInputStream implementation.
> For seek+read systems with even more latency than locally-attached disks, something like
a {{readFully(long[] offsets, ByteBuffer[] chunks)}} would take care of the seeks internally while
reading {{chunk.remaining()}} bytes into each chunk (which may be {{slice()}}ed off a bigger buffer).
> The base implementation can stub this in as a sequence of seeks + read() into ByteBuffers,
without forcing each FS implementation to override it in any way.
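>
> A minimal sketch of that stub, assuming the {{readFully(long[] offsets, ByteBuffer[] chunks)}} signature above (an illustration of the fallback only, not a committed implementation):
> {code:java}
> // Naive default: seek + fill each chunk in the order given. Individual
> // filesystems are free to override this with a planned, coalesced or
> // parallel fetch. Assumes the stream supports read(ByteBuffer)
> // (ByteBufferReadable); a byte[]-based fallback would work the same way.
> public void readFully(long[] offsets, ByteBuffer[] chunks) throws IOException {
>   for (int i = 0; i < offsets.length; i++) {
>     seek(offsets[i]);
>     while (chunks[i].hasRemaining()) {
>       if (read(chunks[i]) < 0) {
>         throw new java.io.EOFException("EOF before filling chunk " + i);
>       }
>     }
>   }
> }
> {code}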



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
