hadoop-common-issues mailing list archives

From "Steve Loughran (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (HADOOP-14943) Add common getFileBlockLocations() emulation for object stores, including S3A
Date Thu, 16 Nov 2017 20:27:00 GMT

     [ https://issues.apache.org/jira/browse/HADOOP-14943?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Steve Loughran updated HADOOP-14943:
------------------------------------
    Attachment: HADOOP-14943-002.patch

HADOOP-14943 patch 002
* Adds a new StoreUtils class in hadoop-common, intended to be the home for utilities that help object stores
* Moves the block-location calculation code from NativeAzureFileSystem into it
* Adds unit tests in hadoop-common
* Fixes a range problem in the copied code (HADOOP-15044)
* WASB: moves to the new implementation
* S3A: implements getFileBlockLocations(); extends TestS3AInputStreamPerformance, as that suite is expected to have a test file larger than the block size of the FS

With the shared code there is less to test and maintain, and it is easier for other store implementations to adopt.
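
For illustration, here is a minimal sketch of the kind of block-location emulation being shared: fabricate {{BlockLocation}} entries by carving the requested byte range into block-size-aligned chunks, all reported on one fake host. Every name in it ({{StoreUtilsSketch}}, {{fakeBlockLocations}}, the {{host:0}} "datanode" name) is hypothetical, not the actual StoreUtils API in the patch.

{code:java}
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FileStatus;

/** Illustrative sketch only; not the patch's StoreUtils. */
public final class StoreUtilsSketch {

  /** Fabricate block locations covering [start, start + len) of a file. */
  public static BlockLocation[] fakeBlockLocations(
      FileStatus status, long start, long len, String host) {
    if (start < 0 || len < 0) {
      throw new IllegalArgumentException("invalid range " + start + "/" + len);
    }
    long fileLen = status.getLen();
    if (len == 0 || start >= fileLen) {
      return new BlockLocation[0];            // nothing in range
    }
    long blockSize = status.getBlockSize();
    if (blockSize <= 0) {
      blockSize = fileLen;                    // no block size: one big block
    }
    // clip the range to the file length, avoiding overflow on start + len
    long end = (len > fileLen - start) ? fileLen : start + len;
    long firstBlock = start / blockSize;
    long lastBlock = (end - 1) / blockSize;
    BlockLocation[] locations =
        new BlockLocation[(int) (lastBlock - firstBlock + 1)];
    String[] names = { host + ":0" };         // fake name; stores have no datanodes
    String[] hosts = { host };
    for (long b = firstBlock; b <= lastBlock; b++) {
      long blockStart = b * blockSize;
      long blockLen = Math.min(blockSize, fileLen - blockStart);
      locations[(int) (b - firstBlock)] =
          new BlockLocation(names, hosts, blockStart, blockLen);
    }
    return locations;
  }
}
{code}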

Testing: S3A Ireland (S3Guard/auth => 5:44 test run) and WASB Ireland

> Add common getFileBlockLocations() emulation for object stores, including S3A
> -----------------------------------------------------------------------------
>
>                 Key: HADOOP-14943
>                 URL: https://issues.apache.org/jira/browse/HADOOP-14943
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3
>    Affects Versions: 2.8.1
>            Reporter: Steve Loughran
>            Assignee: Steve Loughran
>            Priority: Critical
>         Attachments: HADOOP-14943-001.patch, HADOOP-14943-002.patch
>
>
> It looks suspiciously like S3A isn't providing the partitioning data needed in {{listLocatedStatus}}
> and {{getFileBlockLocations()}} to break a file up by the block size. This will stop
> tools using the MRv1 APIs from doing the partitioning properly if the input format isn't doing
> its own split logic.
> FileInputFormat in MRv2 is a bit more configurable about input split calculation and
> will split up large files, but otherwise the partitioning is being done more by the default
> values of the executing engine than by any config data from the filesystem about what
> its "block size" is.
> NativeAzureFS does a better job; maybe that could be factored out to hadoop-common and
> reused?
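
For context on why those block locations matter, here is a simplified, illustrative sketch of MRv2-style split calculation. It is a loose paraphrase of {{FileInputFormat.getSplits()}}, not the real code, and the class/method names below are placeholders:

{code:java}
import java.io.IOException;
import java.util.Arrays;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;

/** Illustrative sketch of FileInputFormat-style split calculation. */
final class SplitSketch {
  private static final double SPLIT_SLOP = 1.1;  // last split may be 10% oversized

  static void printSplits(FileSystem fs, FileStatus file,
      long minSize, long maxSize) throws IOException {
    long length = file.getLen();
    // A zero block size (what a store may report when it has no real blocks)
    // collapses splitSize to minSize: the FS contributes nothing to partitioning.
    long splitSize =
        Math.max(1, Math.max(minSize, Math.min(maxSize, file.getBlockSize())));
    BlockLocation[] blocks = fs.getFileBlockLocations(file, 0, length);
    long bytesRemaining = length;
    while (((double) bytesRemaining) / splitSize > SPLIT_SLOP) {
      emit(blocks, length - bytesRemaining, splitSize);
      bytesRemaining -= splitSize;
    }
    if (bytesRemaining != 0) {
      emit(blocks, length - bytesRemaining, bytesRemaining);
    }
  }

  /** Print one split, tagged with the hosts of the block containing its offset. */
  private static void emit(BlockLocation[] blocks, long offset, long len)
      throws IOException {
    String[] hosts = blocks[blockIndex(blocks, offset)].getHosts();
    System.out.printf("split @%d len %d hosts %s%n",
        offset, len, Arrays.toString(hosts));
  }

  /** Find the block whose byte range covers the given offset. */
  private static int blockIndex(BlockLocation[] blocks, long offset)
      throws IOException {
    for (int i = 0; i < blocks.length; i++) {
      if (offset >= blocks[i].getOffset()
          && offset < blocks[i].getOffset() + blocks[i].getLength()) {
        return i;
      }
    }
    // a store returning no/invalid locations breaks split placement entirely
    throw new IOException("offset " + offset + " is outside any block");
  }
}
{code}

If the filesystem reports no meaningful block size or locations, the splits above degrade to whatever minSize/maxSize defaults the engine supplies, which is the behaviour this issue describes.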



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

