hadoop-common-issues mailing list archives

From "Steve Loughran (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HADOOP-14535) Support for random access and seek of block blobs
Date Mon, 26 Jun 2017 12:23:01 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-14535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16063006#comment-16063006
] 

Steve Loughran commented on HADOOP-14535:
-----------------------------------------

Having delved into the Azure codebase, I think a test could be fitted into {{TestReadAndSeekPageBlobAfterWrite}},
hopefully just by re-using the file generated. Is that the same kind of blob you want to work
with?

BTW, I don't see any uses of readFully() in that test. Rather than seek/read sequences, a sequence
of readFully() operations is more representative of column-store access. Doing something there
to mimic a seek near the end followed by some reads near the start would match that pattern
and line up with any other optimisations of readFully().

FWIW, here's a trace of some TPC-DS benchmark IO:

https://raw.githubusercontent.com/rajeshbalamohan/hadoop-aws-wrapper/master/stream_access_query_27_tpcds_200gb.log

a line like
{code}
.../000098_0,readFully,17113131,0,0,17111727,342,44181435
{code}
means "in file 000098_0, position 17113131, a readFully(offset=17111727, bytes=342) with a
duration of 44,181,435 ns".
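A throwaway parser for those trace lines; the field names are my reading of the log layout (the two zero-valued columns aren't explained by the example), so treat them as assumptions:

```python
from collections import namedtuple

# Field names inferred from the example line above; f1/f2 are the
# unexplained zero-valued columns.
Op = namedtuple("Op", "path op pos f1 f2 offset length duration_ns")

def parse_trace_line(line):
    # path,op followed by six integer fields, comma-separated
    path, op, *nums = line.strip().split(",")
    return Op(path, op, *map(int, nums))

rec = parse_trace_line(".../000098_0,readFully,17113131,0,0,17111727,342,44181435")
```

With that in hand it's easy to aggregate the trace by operation type and read size, which is how you'd confirm the small-random-read pattern before tuning for it.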

That's the seek pattern this optimisation is clearly targeting; the regression we need
to avoid is the "byte 0 to EOF" sequential read, which is what .gz processing involves.

I'll set up some of my downstream tests in https://github.com/hortonworks-spark/cloud-integration
to do this in Spark: going from .gz to ORC & Parquet and then scanning. As this uses
the actual libraries, it's a full integration test of the seek() code.

> Support for random access and seek of block blobs
> -------------------------------------------------
>
>                 Key: HADOOP-14535
>                 URL: https://issues.apache.org/jira/browse/HADOOP-14535
>             Project: Hadoop Common
>          Issue Type: Improvement
>          Components: fs/azure
>            Reporter: Thomas
>            Assignee: Thomas
>         Attachments: 0001-Random-access-and-seek-imporvements-to-azure-file-system.patch, 0003-Random-access-and-seek-imporvements-to-azure-file-system.patch, 0004-Random-access-and-seek-imporvements-to-azure-file-system.patch
>
>
> This change adds a seek-able stream for reading block blobs to the wasb:// file system.
> If seek() is not used or if only forward seek() is used, the behavior of read() is unchanged.
> That is, the stream is optimized for sequential reads by reading chunks (over the network) in
> the size specified by "fs.azure.read.request.size" (default is 4 megabytes).
> If reverse seek() is used, the behavior of read() changes in favor of reading the actual
> number of bytes requested in the call to read(), with some constraints.  If the size requested
> is smaller than 16 kilobytes and cannot be satisfied by the internal buffer, the network read
> will be 16 kilobytes.  If the size requested is greater than 4 megabytes, it will be satisfied
> by sequential 4 megabyte reads over the network.
> This change improves the performance of FSInputStream.seek() by not closing and re-opening
> the stream, which for block blobs also involves a network operation to read the blob metadata.
> Now NativeAzureFsInputStream.seek() checks if the stream is seek-able and moves the read
> position.
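The read-sizing rules quoted in the issue description can be sketched as a pure function. This is only my reading of the description, not the patch itself; the constant names are illustrative (the real config key is "fs.azure.read.request.size"):

```python
SMALL_READ_FLOOR = 16 * 1024          # 16 KB minimum network read
READ_REQUEST_SIZE = 4 * 1024 * 1024   # fs.azure.read.request.size default

def network_reads(requested, buffered):
    """Sizes of the network reads issued for one read() call after a
    reverse seek, per the rules in the issue description."""
    if requested <= buffered:
        return []                      # satisfied from the internal buffer
    if requested < SMALL_READ_FLOOR:
        return [SMALL_READ_FLOOR]      # round small reads up to 16 KB
    if requested > READ_REQUEST_SIZE:
        # large reads fall back to sequential 4 MB chunks
        full, rem = divmod(requested, READ_REQUEST_SIZE)
        return [READ_REQUEST_SIZE] * full + ([rem] if rem else [])
    return [requested]                 # read exactly what was asked for
```

For example, a 342-byte readFully() miss (like the trace line above) would trigger one 16 KB network read, while a 9 MB request would be served as two 4 MB reads plus a 1 MB remainder.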



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-issues-help@hadoop.apache.org

