hadoop-common-issues mailing list archives

From "Steve Loughran (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HADOOP-11584) s3a file block size set to 0 in getFileStatus
Date Sat, 21 Feb 2015 12:09:12 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-11584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14330166#comment-14330166 ]

Steve Loughran commented on HADOOP-11584:

Based on my own review and on tests by others: +1, committing.

Some of the code in the patch is mine, so I feel I should justify why I am casting that vote
myself, i.e. whether I am tainted:

# the initial patch did actually fix the bug, so I could have +1'd it there and then
# except that I wanted tests, which started in -002.patch
# The -003 patch, which does involve me, moved the tests into a new class and tested two new
corner cases (consistency between stat and ls; block size of the root path)
# I was well placed to do that, as the author of the contract test code and as someone already
set up to run the s3 tests
# it's been reviewed and tested by others, including against S3 US

If anyone feels I shouldn't be committing patch 3, they are free to revert it and take my
+1 as a vote for the "no-tests-fix-bugs" patch 001.

> s3a file block size set to 0 in getFileStatus
> ---------------------------------------------
>                 Key: HADOOP-11584
>                 URL: https://issues.apache.org/jira/browse/HADOOP-11584
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3
>    Affects Versions: 2.6.0
>            Reporter: Dan Hecht
>            Assignee: Brahma Reddy Battula
>            Priority: Blocker
>         Attachments: HADOOP-10584-003.patch, HADOOP-111584.patch, HADOOP-11584-002.patch
> The consequence is that mapreduce probably is not splitting s3a files in the expected
> way. This is similar to HADOOP-5861 (which was for s3n, though s3n was passing 5G rather
> than 0 for block size).
> FileInputFormat.getSplits() relies on the FileStatus block size being set:
> {code}
>         if (isSplitable(job, path)) {
>           long blockSize = file.getBlockSize();
>           long splitSize = computeSplitSize(blockSize, minSize, maxSize);
> {code}
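To make the impact of the zero concrete, here is a small self-contained sketch of the standard split-size arithmetic used by FileInputFormat, {{max(minSize, min(maxSize, blockSize))}}; with a block size of 0 the split size collapses to the configured minimum (default 1), rather than tracking the block size. The class name and default values here are illustrative, not the Hadoop source.

```java
// Sketch of FileInputFormat's split-size arithmetic, to show why a
// reported block size of 0 is harmful. Formula:
//   splitSize = max(minSize, min(maxSize, blockSize))
public class SplitSizeDemo {
    static long computeSplitSize(long blockSize, long minSize, long maxSize) {
        return Math.max(minSize, Math.min(maxSize, blockSize));
    }

    public static void main(String[] args) {
        long minSize = 1L;              // typical split.minsize default
        long maxSize = Long.MAX_VALUE;  // typical split.maxsize default

        // s3a reporting block size 0: the split size collapses to minSize.
        System.out.println(computeSplitSize(0L, minSize, maxSize));

        // A sane block size (e.g. 32 MB): splits track the block size.
        System.out.println(computeSplitSize(32L * 1024 * 1024, minSize, maxSize));
    }
}
```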
> However, S3AFileSystem does not set the FileStatus block size field. From S3AFileStatus.java:
> {code}
>   // Files
>   public S3AFileStatus(long length, long modification_time, Path path) {
>     super(length, false, 1, 0, modification_time, path);
>     isEmptyDirectory = false;
>   }
> {code}
> I think it should use S3AFileSystem.getDefaultBlockSize() for each file's block size
> (where it's currently passing 0).
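A hedged sketch of the shape of the suggested fix (not the committed patch): thread a caller-supplied block size, e.g. the value of S3AFileSystem.getDefaultBlockSize(), through to the FileStatus superclass instead of the literal 0. The Hadoop types are stood in for with a minimal local stub so the sketch is self-contained; all names here are illustrative.

```java
// Minimal stand-in for org.apache.hadoop.fs.FileStatus, just enough to
// show the constructor-argument change.
class FileStatusStub {
    final long length, blockSize, modificationTime;

    FileStatusStub(long length, boolean isDir, int replication,
                   long blockSize, long modificationTime) {
        this.length = length;
        this.blockSize = blockSize;
        this.modificationTime = modificationTime;
    }
}

public class S3AFileStatusSketch extends FileStatusStub {
    // Before: super(length, false, 1, 0, modification_time, path) -- block size 0.
    // After: the caller supplies a default block size instead of 0.
    public S3AFileStatusSketch(long length, long modificationTime,
                               long defaultBlockSize) {
        super(length, false, 1, defaultBlockSize, modificationTime);
    }

    public static void main(String[] args) {
        S3AFileStatusSketch st =
            new S3AFileStatusSketch(1_000_000L, 0L, 32L * 1024 * 1024);
        System.out.println(st.blockSize); // no longer 0
    }
}
```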

This message was sent by Atlassian JIRA
