hadoop-common-issues mailing list archives

From "Steve Loughran (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HADOOP-11584) s3a file block size set to 0 in getFileStatus
Date Sat, 21 Feb 2015 11:02:13 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-11584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14330126#comment-14330126 ]

Steve Loughran commented on HADOOP-11584:
-----------------------------------------

The S3A blocksize test is a subclass of the Hadoop 2.5+ FS contract tests, where the
pattern is fs.contract.test.fs.${filesystemuri} == test FS URI. The other one is just
a copy of the s3n naming policy.

You can use variables in your XML test resources to share the values.
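
For illustration, a minimal test resource along those lines might look like the sketch
below. The test.fs.s3a.name variable and the bucket URI are placeholders of mine, not
the project's actual key names, but Hadoop's Configuration does expand ${...} references:

{code}
<configuration>
  <!-- Placeholder variable holding the test bucket URI -->
  <property>
    <name>test.fs.s3a.name</name>
    <value>s3a://your-test-bucket/</value>
  </property>
  <!-- Contract-test FS URI for the s3a scheme, shared via variable expansion -->
  <property>
    <name>fs.contract.test.fs.s3a</name>
    <value>${test.fs.s3a.name}</value>
  </property>
</configuration>
{code}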

> s3a file block size set to 0 in getFileStatus
> ---------------------------------------------
>
>                 Key: HADOOP-11584
>                 URL: https://issues.apache.org/jira/browse/HADOOP-11584
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3
>    Affects Versions: 2.6.0
>            Reporter: Dan Hecht
>            Assignee: Brahma Reddy Battula
>            Priority: Blocker
>         Attachments: HADOOP-10584-003.patch, HADOOP-111584.patch, HADOOP-11584-002.patch
>
>
> The consequence is that mapreduce probably is not splitting s3a files in the expected
> way. This is similar to HADOOP-5861 (which was for s3n, though s3n was passing 5G
> rather than 0 for block size).
> FileInputFormat.getSplits() relies on the FileStatus block size being set:
> {code}
>         if (isSplitable(job, path)) {
>           long blockSize = file.getBlockSize();
>           long splitSize = computeSplitSize(blockSize, minSize, maxSize);
> {code}
> However, S3AFileSystem does not set the FileStatus block size field. From S3AFileStatus.java:
> {code}
>   // Files
>   public S3AFileStatus(long length, long modification_time, Path path) {
>     super(length, false, 1, 0, modification_time, path);
>     isEmptyDirectory = false;
>   }
> {code}
> I think it should use S3AFileSystem.getDefaultBlockSize() for each file's block size
> (where it's currently passing 0).
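
To spell out why a zero block size is harmful: FileInputFormat.computeSplitSize() clamps
the block size between the configured minimum and maximum split sizes, so a zero block
size collapses the split size to the minimum. A short sketch of the arithmetic (the
helper body matches FileInputFormat as I read it; the default minSize/maxSize values
shown are assumptions):

{code}
// Split-size computation as found in
// org.apache.hadoop.mapreduce.lib.input.FileInputFormat:
protected long computeSplitSize(long blockSize, long minSize, long maxSize) {
  return Math.max(minSize, Math.min(maxSize, blockSize));
}

// With S3A reporting blockSize == 0 and the usual defaults
// (minSize == 1, maxSize == Long.MAX_VALUE -- assumed here):
//   Math.min(Long.MAX_VALUE, 0) == 0
//   Math.max(1, 0)              == 1
// i.e. splitSize degenerates to minSize instead of tracking a sensible
// block size, so files are not split the way callers expect.
{code}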
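A minimal sketch of the fix the description proposes, i.e. threading
S3AFileSystem.getDefaultBlockSize() through to the file status. The four-argument
constructor and the call site shown are my assumptions, not the attached patch:

{code}
// Hypothetical sketch, not the attached patch: give S3AFileStatus a
// block-size parameter instead of hard-coding 0 in the super() call...
public S3AFileStatus(long length, long modification_time, Path path,
                     long blockSize) {
  super(length, false, 1, blockSize, modification_time, path);
  isEmptyDirectory = false;
}

// ...and have S3AFileSystem.getFileStatus() pass its default, roughly:
//   new S3AFileStatus(meta.getContentLength(),
//       meta.getLastModified().getTime(), path,
//       getDefaultBlockSize(path));
{code}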



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
